Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1986-01-01
An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory to the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
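The derivative propagation performed by a computer-calculus compiler can be illustrated with a minimal forward-mode automatic differentiation sketch. This is a language-neutral analogue only: GRESS instruments FORTRAN source, and none of its code appears in the abstract, so the Python class and example function below are hypothetical illustrations of the idea, not GRESS itself.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative sketch only: GRESS instruments FORTRAN source; this Python
# analogue shows how derivatives propagate alongside values.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sensitivity(f, x):
    """Return df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).der

# Hypothetical model response y = 3*x^2 + 2*x; dy/dx = 6*x + 2 = 14 at x = 2.
dydx = sensitivity(lambda x: 3 * x * x + 2 * x, 2.0)
```

The same seeding idea, applied once per input (direct mode) or once per output (adjoint mode), is what makes the sensitivity equations mechanical to set up.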
Ferrofluids: Modeling, numerical analysis, and scientific computation
Tomas, Ignacio
This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much larger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point for this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to Rosensweig's much more complex model, we provide a definition (approximation) for the effective magnetizing field h and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory to the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
Analysis of a Model for Computer Virus Transmission
Directory of Open Access Journals (Sweden)
Peng Qin
2015-01-01
Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. A computer virus model is established. Through analysis of the model, the disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
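The abstract does not reproduce the model equations. The sketch below is a generic SIR-type compartment model with recruitment and removal; every equation and parameter value is an assumption chosen only to illustrate how a disease-free versus endemic equilibrium arises, not the paper's actual model.

```python
# Illustrative SIR-style virus model with node recruitment (b) and removal
# (mu). The paper's exact equations are not given in the abstract, so this
# formulation and all parameter values are assumptions for demonstration.
def simulate(beta=0.3, gamma=0.1, b=0.02, mu=0.02,
             s0=0.99, i0=0.01, r0=0.0, dt=0.01, steps=50_000):
    s, i, r = s0, i0, r0
    for _ in range(steps):  # forward Euler integration
        ds = b - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        dr = gamma * i - mu * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

# With these values the basic reproduction number beta/(gamma+mu) = 2.5 > 1,
# so the system settles near the endemic equilibrium s* = 0.4, i* = 0.1
# rather than the disease-free point (i = 0).
s, i, r = simulate()
```

Sweeping `beta` below `gamma + mu` makes the infected fraction decay to zero, which is the stability switch between the two equilibria the abstract refers to.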
AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS
Energy Technology Data Exchange (ETDEWEB)
Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang
2010-08-01
The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the high temperature gas-cooled reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.
Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis
Young, Cristobal; Holsteen, Katherine
2017-01-01
Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…
Computational Models for Analysis of Illicit Activities
DEFF Research Database (Denmark)
Nizamani, Sarwat
been explored in this thesis by considering them as epidemic-like processes. A mathematical model has been developed based on differential equations, which studies the dynamics of the issues from the very beginning until the issues cease. This study extends classical models of the spread of epidemics...... to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following the disclosure of some unpopular political decisions and/or activities. The results shed new light on terror activity and provide some hints on how to curb the spreading of violence within...
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
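For reference, a first-order Sobol index can be estimated with the standard pick-freeze Monte Carlo scheme. The sketch below applies it to a toy scalar-input function; the test function, sample size, and estimator variant are illustrative choices, not taken from the paper (which addresses the harder functional-input case via metamodels).

```python
# First-order Sobol indices by the pick-freeze Monte Carlo estimator:
# S_i = Cov(f(A), f(B with column i from A)) / Var(f(A)).
# The test function and sample size below are illustrative choices.
import random

def sobol_first_order(f, dim, n=100_000, seed=0):
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(dim)] for _ in range(n)]
    b = [[rng.random() for _ in range(dim)] for _ in range(n)]
    ya = [f(x) for x in a]
    mean = sum(ya) / n
    var = sum(y * y for y in ya) / n - mean * mean
    indices = []
    for i in range(dim):
        # B-sample with column i "frozen" from A
        yab = [f(b[j][:i] + [a[j][i]] + b[j][i + 1:]) for j in range(n)]
        cov = sum(ya[j] * yab[j] for j in range(n)) / n - mean * mean
        indices.append(cov / var)
    return indices

# Linear model y = x1 + 2*x2 on independent uniforms: the analytic
# first-order indices are 1/5 and 4/5.
s1, s2 = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
```

The estimator costs (dim + 1) * n model runs, which is exactly why the paper inserts a metamodeling step before computing indices for expensive codes.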
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
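An embarrassingly parallel simulation simply distributes independent replications across workers. The tutorial's own examples are in MATLAB and R; the sketch below shows the same pattern in Python with `multiprocessing`, where the pi-estimation task is a hypothetical stand-in for any replication function a risk analyst might run.

```python
# Embarrassingly parallel Monte Carlo: each replication is independent,
# so runs can simply be mapped across cores. Python illustration only;
# the tutorial itself uses MATLAB and R.
import random
from multiprocessing import Pool

def one_replication(seed):
    """One independent simulation run (here: estimate pi by sampling)."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(100_000))
    return 4.0 * hits / 100_000

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per CPU core
        estimates = pool.map(one_replication, range(16))
    print(sum(estimates) / len(estimates))
```

Seeding each replication from its index keeps runs reproducible and statistically independent, which matters more than raw speedup when quantifying uncertainty.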
Computational mathematics models, methods, and analysis with Matlab and MPI
White, Robert E
2004-01-01
Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using the message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu/~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...
A Computational Analysis Model for Open-ended Cognitions
Morita, Junya; Miwa, Kazuhisa
In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played a critical role as theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena. We call such a situation an ``open-ended situation''. In this study, MAC/FAC (``many are called, but few are chosen''), proposed by [Forbus 95], which models the two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases that had been learned in their everyday life. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption - different similarities are involved in the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognitions.
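The MAC stage ranks retrieval candidates by cheap content-vector similarity before the expensive structural (FAC) evaluation. The sketch below shows such a vector-similarity filter; the case names, vectors, and cutoff are hypothetical, and this is not Forbus's implementation.

```python
# MAC-stage style filter: rank stored cases by content-vector similarity
# to a probe and keep only the best few for structural (FAC) evaluation.
# The cases, vectors, and cutoff are illustrative, not Forbus's code.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mac_filter(probe, memory, keep=3):
    """Return the `keep` case names whose vectors best match the probe."""
    ranked = sorted(memory, key=lambda item: cosine(probe, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:keep]]

cases = [("case-a", [1.0, 0.0, 1.0]), ("case-b", [0.0, 1.0, 0.0]),
         ("case-c", [1.0, 0.1, 0.9])]
best = mac_filter([1.0, 0.0, 1.0], cases, keep=2)
```

The point of the two-stage design, and of the paper's finding, is that this cheap score predicts what gets retrieved, while structural evaluation predicts how sound the analogy is judged to be.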
Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes
International Nuclear Information System (INIS)
Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.
2002-01-01
A method of accounting for fluid-to-fluid shear between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (computational fluid dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulic systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
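The equivalent-diameter idea generalizes the conventional hydraulic diameter relation D_h = 4A/P and its use in friction laws such as the laminar f = 64/Re. The helpers below show only these textbook baselines with made-up numbers; the paper's contribution is replacing the wetted-perimeter-based D_h with one inferred from the velocity profile, which is not reproduced here.

```python
# Textbook baselines that the equivalent-diameter method generalizes when
# a cell's wetted perimeter no longer reflects the true wall shear.
# The geometry and Reynolds number below are illustrative only.
def hydraulic_diameter(flow_area, wetted_perimeter):
    """Conventional hydraulic diameter D_h = 4*A / P."""
    return 4.0 * flow_area / wetted_perimeter

def darcy_friction_laminar(reynolds):
    """Laminar Darcy friction factor f = 64 / Re."""
    return 64.0 / reynolds

# Hypothetical square subchannel, 0.01 m on a side:
d_h = hydraulic_diameter(1e-4, 4 * 0.01)   # 0.01 m
f = darcy_friction_laminar(1000.0)         # 0.064
```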
A Computable OLG Model for Gender and Growth Policy Analysis
Pierre-Richard Agénor
2012-01-01
This paper develops a computable Overlapping Generations (OLG) model for gender and growth policy analysis. The model accounts for human and physical capital accumulation (both public and private), intra- and inter-generational health persistence, fertility choices, and women's time allocation between market work, child rearing, and home production. Bargaining between spouses and gender bias, in the form of discrimination in the work place and mothers' time allocation between daughters and so...
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1990-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1991-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
Advanced data analysis in neuroscience integrating statistical and computational models
Durstewitz, Daniel
2017-01-01
This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations. Many applications of MMA will
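The discrimination criteria named above are typically turned into model weights by exponentiating delta values relative to the best model, w_k = exp(-delta_k/2) / sum_j exp(-delta_j/2). The sketch below shows this standard formula; the likelihood and parameter-count values are made up, and this is not MMA's code.

```python
# Information-criterion model weights as used in multimodel averaging:
# delta_k = IC_k - min_j IC_j, then w_k = exp(-delta_k/2) normalized.
# Works for AIC, AICc, BIC, or KIC values; inputs below are hypothetical.
import math

def aic(n_params, log_likelihood):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def model_weights(criteria):
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Two hypothetical calibrated models: 3 params / ln(L) = -10 vs 5 / -9.
weights = model_weights([aic(3, -10.0), aic(5, -9.0)])
```

The resulting weights are what drive the report's model ranking, model-averaged estimates, and integrated uncertainty intervals.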
Computer Models for IRIS Control System Transient Analysis
International Nuclear Information System (INIS)
Gary D Storrick; Bojan Petrovic; Luca Oriani
2007-01-01
This report presents results of the Westinghouse work performed under Task 3 of this Financial Assistance Award and it satisfies a Level 2 Milestone for the project. Task 3 of the collaborative effort between ORNL, Brazil and Westinghouse for the International Nuclear Energy Research Initiative entitled 'Development of Advanced Instrumentation and Control for an Integrated Primary System Reactor' focuses on developing computer models for transient analysis. This report summarizes the work performed under Task 3 on developing control system models. The present state of the IRIS plant design--such as the lack of a detailed secondary system or I and C system designs--makes finalizing models impossible at this time. However, this did not prevent making considerable progress. Westinghouse has several working models in use to further the IRIS design. We expect to continue modifying the models to incorporate the latest design information until the final IRIS unit becomes operational. Section 1.2 outlines the scope of this report. Section 2 describes the approaches we are using for non-safety transient models. It describes the need for non-safety transient analysis and the model characteristics needed to support those analyses. Section 3 presents the RELAP5 model. This is the highest-fidelity model used for benchmark evaluations. However, it is prohibitively slow for routine evaluations and additional lower-fidelity models have been developed. Section 4 discusses the current Matlab/Simulink model. This is a low-fidelity, high-speed model used to quickly evaluate and compare competing control and protection concepts. Section 5 describes the Modelica models developed by POLIMI and Westinghouse. The object-oriented Modelica language provides convenient mechanisms for developing models at several levels of detail. We have used this to develop a high-fidelity model for detailed analyses and a faster-running simplified model to help speed the I and C development process. Section
Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective
Chen, Shyi-Ming
2013-01-01
Temporal and spatiotemporal data form an inherent fabric of the society as we are faced with streams of data coming from numerous sensors, data feeds, recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume is aimed to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...
Computer-aided pulmonary image analysis in small animal models
Energy Technology Data Exchange (ETDEWEB)
Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J. [Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Bagci, Ulas, E-mail: ulasbagci@gmail.com [Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, Florida 32816 (United States); Kramer-Marek, Gabriela [The Institute of Cancer Research, London SW7 3RP (United Kingdom); Luna, Brian [Microfluidic Laboratory Automation, University of California-Irvine, Irvine, California 92697-2715 (United States); Kubler, Andre [Department of Medicine, Imperial College London, London SW7 2AZ (United Kingdom); Dey, Bappaditya; Jain, Sanjay [Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Foster, Brent [Department of Biomedical Engineering, University of California-Davis, Davis, California 95817 (United States); Papadakis, Georgios Z. [Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Camp, Jeremy V. [Department of Microbiology and Immunology, University of Louisville, Louisville, Kentucky 40202 (United States); Jonsson, Colleen B. [National Institute for Mathematical and Biological Synthesis, University of Tennessee, Knoxville, Tennessee 37996 (United States); Bishai, William R. [Howard Hughes Medical Institute, Chevy Chase, Maryland 20815 and Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Udupa, Jayaram K. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2015-07-15
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute to advances in preclinical research in pulmonary diseases.
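The reported overlap score is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed between a segmentation and its reference. For reference, a minimal implementation over voxel index sets (the example sets are hypothetical, not from the study):

```python
# Dice similarity coefficient between two segmentations, represented here
# as sets of voxel indices. DSC = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect
# overlap. The example voxel sets below are hypothetical.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Two toy 3-voxel masks sharing 2 voxels: DSC = 2*2 / (3+3) = 2/3.
score = dice([1, 2, 3], [2, 3, 4])
```

A mean DSC above 0.9, as reported, means the automatic lung masks and the references agree on the large majority of voxels.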
Directory of Open Access Journals (Sweden)
Alina Żogała
2014-01-01
Originality/value: This paper presents the state of the art in the field of coal gasification modeling using kinetic and computational fluid dynamics approaches. The paper also presents the author's own comparative analysis (concerned with mathematical formulation, input data and parameters, basic assumptions, obtained results, etc.) of the most important models of underground coal gasification.
Nee, John G.; Kare, Audhut P.
1987-01-01
Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews, and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)
Mathematical modelling and computational methods for structural analysis of LMFBR's
International Nuclear Information System (INIS)
Liu, W.K.; Lam, D.
1983-01-01
In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computation effort by using an implicit-explicit mixed time integration method. (orig.)
Causal Analysis for Performance Modeling of Computer Programs
Directory of Open Access Journals (Sweden)
Jan Lemeire
2007-01-01
Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing a complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of the experimental data is estimated by kernel density estimation. We then report on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for an LU decomposition algorithm and on a study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
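Mutual information between a performance factor and a metric can be estimated nonparametrically from experimental runs. The paper uses kernel density estimation; the sketch below substitutes a simpler binned (histogram) estimator to show the idea, so the binning scheme and the toy data are assumptions, not the paper's method.

```python
# Histogram-based mutual information estimate. The paper uses kernel
# density estimation; a binned estimator is shown here as a simpler
# stand-in. Assumes bins of equal width over the observed range.
import math
from collections import Counter

def mutual_information(xs, ys, bins=10):
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    bx = [bin_of(x, min(xs), max(xs)) for x in xs]
    by = [bin_of(y, min(ys), max(ys)) for y in ys]
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log((c / n) / (px[i] / n * py[j] / n))
               for (i, j), c in pxy.items())

# A deterministically dependent pair scores high (ln(bins) at most);
# an independent pair would score near zero.
xs = [i / 100 for i in range(100)]
mi_dep = mutual_information(xs, xs)
```

Because mutual information captures any form of dependence, not just linear correlation, it suits the "form-free" dependency analysis the paper describes.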
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Computational movement analysis
Laube, Patrick
2014-01-01
This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: Conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and using decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fi
Comparative study of computational model for pipe whip analysis
International Nuclear Information System (INIS)
Koh, Sugoong; Lee, Young-Shin
1993-01-01
Many types of pipe whip restraints are installed to protect structural components from the anticipated pipe whip phenomena of high-energy lines in nuclear power plants. These phenomena must be investigated accurately in order to evaluate the acceptability of a pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify their validity. In this study, various calculational models in the ANSYS and ADLPIPE codes, both general-purpose finite element computer programs, were used to simulate the postulated pipe whips and obtain impact loads, and the calculated results were compared with experimental results from a sample pipe whip test of U-shaped pipe whip restraints. Some calculational models, having a spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of a pipe whip restraint design. (author)
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
Computer-Aided Model Based Analysis for Design and Operation of a Copolymerization Process
DEFF Research Database (Denmark)
Lopez-Arenas, Maria Teresa; Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul
2006-01-01
The advances in computer science and computational algorithms for process modelling, process simulation, numerical methods and design/synthesis algorithms make it advantageous and helpful to employ computer-aided modelling systems and tools for integrated process analysis. This is illustrated in this work: through the computer-aided modeling system ICAS-MoT, two first-principles models have been investigated with respect to design and operational issues for solution copolymerization reactors in general, and for the methyl methacrylate/vinyl acetate system in particular. Model 1 is taken from the literature and is commonly used for the low-conversion region, while Model 2 has ... This will allow analysis of the process behaviour, contribute to a better understanding of the polymerization process, help to avoid unsafe conditions of operation, and support the development of operational and optimizing control strategies.
Computational Modeling and Analysis of Mechanically Painful Stimulations
DEFF Research Database (Denmark)
Manafi Khanian, Bahram
Cuff algometry is used for quantitative assessment of deep-tissue sensitivity. The main purpose of this PhD dissertation is to provide a novel insight into the intrinsic and extrinsic factors which are involved in mechanically induced pain during cuff pressure algometry. A computational 3D finite...
Computational modeling applied to stress gradient analysis for metallic alloys
International Nuclear Information System (INIS)
Iglesias, Susana M.; Assis, Joaquim T. de; Monine, Vladimir I.
2009-01-01
Nowadays, composite materials, including materials reinforced by particles, are at the center of researchers' attention. There are problems with stress measurements in these materials, connected with the superficial stress gradient caused by the difference between the stress state of particles on the surface and in the matrix of the composite material. Computer simulation of the diffraction profile formed by the superficial layers of a material makes it possible to simulate the diffraction experiment and to resolve the problem of stress measurement when the stress state is characterized by a strong gradient. The aim of this paper is the application of a computer simulation technique, initially developed for homogeneous materials, to diffraction line simulation for composite materials and alloys. Specifically, we applied this technique to silumin fabricated by powder metallurgy. (author)
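The layer-by-layer simulation idea can be sketched as an absorption-weighted superposition of diffraction peaks emitted by sub-surface layers whose strain varies with depth. The absorption coefficient, strain profile, peak width and Bragg angle below are all invented for illustration, not values from the paper:

```python
import numpy as np

def diffraction_profile(two_theta, strain_of_depth, mu=0.05,
                        depth_max=200.0, n_layers=400,
                        width=0.05, two_theta0=78.0):
    """Diffraction line as an absorption-weighted sum of Gaussian peaks
    from sub-surface layers (depths in micrometres, angles in degrees).
    mu is an assumed effective absorption coefficient."""
    z = np.linspace(0.0, depth_max, n_layers)
    weights = np.exp(-mu * z)                 # beam attenuation with depth
    profile = np.zeros_like(two_theta)
    for zi, wi in zip(z, weights):
        # lattice strain shifts the Bragg peak: d(2theta) = -2 tan(theta0) * eps
        shift = -2.0 * np.tan(np.radians(two_theta0 / 2)) * strain_of_depth(zi)
        center = two_theta0 + np.degrees(shift)
        profile += wi * np.exp(-0.5 * ((two_theta - center) / width) ** 2)
    return profile / profile.max()

tt = np.linspace(77.0, 79.0, 2001)
# homogeneous strain vs. a strain gradient decaying into the bulk
uniform = diffraction_profile(tt, lambda z: 1e-3)
gradient = diffraction_profile(tt, lambda z: 1e-3 * np.exp(-z / 30.0))
peak_uniform = tt[uniform.argmax()]
peak_gradient = tt[gradient.argmax()]
```

With a gradient, the line becomes an asymmetric blend of contributions from differently strained depths, so its apparent peak position no longer reflects the surface stress alone, which is the measurement problem the abstract describes.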
De novo structural modeling and computational sequence analysis ...
African Journals Online (AJOL)
Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed by PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...
Computer-aided-engineering system for modeling and analysis of ECLSS integration testing
Sepahban, Sonbol
1987-01-01
The accurate modeling and analysis of the two-phase fluid networks found in environmental control and life support systems (ECLSS) is presently undertaken with computer-aided engineering (CAE) techniques whose generalized fluid dynamics package can solve arbitrary flow networks. The CAE system for integrated test bed modeling and analysis also furnishes interfaces and subsystem/test-article mathematical models. Three-dimensional diagrams of the test bed are generated by the system after the requisite simulation and analysis are performed.
Computational modeling and analysis of iron release from macrophages.
Directory of Open Access Journals (Sweden)
Alka A Potdar
2014-07-01
A major process of iron homeostasis in whole-body iron metabolism is the release of iron from the macrophages of the reticuloendothelial system. Macrophages recognize and phagocytose senescent or damaged erythrocytes. They then process the heme iron, which is returned to the circulation for reutilization by red blood cell precursors during erythropoiesis. The amount of iron released, compared to the amount shunted for storage as ferritin, is greater during iron deficiency. A currently accepted model of iron release assumes a passive gradient, with free diffusion of intracellular labile iron (Fe2+) through ferroportin (FPN), the transporter on the plasma membrane. Outside the cell, a multi-copper ferroxidase, ceruloplasmin (Cp), oxidizes ferrous to ferric ion. Apo-transferrin (Tf), the primary carrier of soluble iron in the plasma, binds ferric ion to form mono-ferric and di-ferric transferrin. According to the passive-gradient model, the removal of ferrous ion from the site of release sustains the gradient that maintains iron release. Subcellular localization of FPN, however, indicates that the role of FPN may be more complex. By experiments and mathematical modeling, we have investigated the detailed mechanism of iron release from macrophages, focusing on the roles of Cp, FPN and apo-Tf. The passive-gradient model is quantitatively analyzed using a mathematical model for the first time. A comparison of experimental data with model simulations shows that the passive-gradient model cannot explain macrophage iron release. However, a facilitated-transport model associated with FPN can explain the iron release mechanism. According to the facilitated-transport model, intracellular FPN carries labile iron to the macrophage membrane. Extracellular Cp accelerates the oxidation of ferrous ion bound to FPN. Apo-Tf in the extracellular environment binds the oxidized ferrous ion, completing the release process. The facilitated-transport model can
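As a toy illustration of why the two hypotheses behave differently, the sketch below integrates invented rate laws for both: a gradient-driven flux for the passive model and a saturable carrier flux for the facilitated model, with a single rate constant standing in for ceruloplasmin's oxidation of extracellular ferrous ion. Every constant is illustrative, not from the paper:

```python
def simulate(model, k_ox, t_end=10.0, dt=1e-3):
    """Toy release dynamics. Ci: intracellular labile iron, Ce: extracellular
    ferrous iron awaiting oxidation, k_ox: ceruloplasmin-like oxidation rate.
    Returns cumulative iron transported out of the cell. All rate constants
    and units are invented for illustration only."""
    Ci, Ce, transported = 10.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        if model == "passive":
            flux = max(Ci - Ce, 0.0)        # free diffusion down the gradient
        else:                               # facilitated transport through FPN
            flux = 2.0 * Ci / (5.0 + Ci)    # carrier-mediated, saturable
        Ci -= flux * dt
        Ce += (flux - k_ox * Ce) * dt       # oxidation clears ferrous ion
        transported += flux * dt
    return transported

passive_no_cp = simulate("passive", k_ox=0.0)
facilitated_no_cp = simulate("facilitated", k_ox=0.0)
passive_cp = simulate("passive", k_ox=5.0)
```

In the toy run, the passive flux stalls once extracellular ferrous ion accumulates (release stops at the equilibrium point unless oxidation keeps clearing it), while the carrier-mediated flux keeps emptying the cell, mirroring the qualitative distinction the abstract draws.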
Qweak Data Analysis for Target Modeling Using Computational Fluid Dynamics
Moore, Michael; Covrig, Silviu
2015-04-01
The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest-power LH2 target in the world and the first at Jefferson Lab to be designed with Computational Fluid Dynamics (CFD). The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target met its design goals; the CFD model was bench-marked with the Qweak target data. This work is an essential ingredient in future designs of very high power, low-noise targets like MOLLER (5 kW, target noise asymmetry contribution < 25 ppm) and MESA (4.5 kW).
Modeling and Analysis of Information Attack in Computer Networks
National Research Council Canada - National Science Library
Pepyne, David
2003-01-01
.... Such attacks are particularly problematic because they take place in a "virtual cyber world" that lacks the social, economic, legal, and physical barriers and protections that control and limit crime in the material world. Research outcomes include basic theory, a modeling framework for Internet worms and email viruses, a sensor for user profiling, and a simple protocol for enhancing wireless security.
International Nuclear Information System (INIS)
Bonacorsi, D.
2007-01-01
The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.
Computer model analysis of the radial artery pressure waveform.
Schwid, H A; Taylor, L A; Smith, N T
1987-10-01
Simultaneous measurements of aortic and radial artery pressures are reviewed, and a model of the cardiovascular system is presented. The model is based on resonant networks for the aorta and axillo-brachial-radial arterial system. The model chosen is a simple one, in order to make interpretation of the observed relationships clear. Despite its simplicity, the model produces realistic aortic and radial artery pressure waveforms. It demonstrates that the resonant properties of the arterial wall significantly alter the pressure waveform as it is propagated from the aorta to the radial artery. Although the mean and end-diastolic radial pressures are usually accurate estimates of the corresponding aortic pressures, the systolic pressure at the radial artery is often much higher than that of the aorta due to overshoot caused by the resonant behavior of the radial artery. The radial artery dicrotic notch is predominantly dependent on the axillo-brachial-radial arterial wall properties, rather than on the aortic valve or peripheral resistance. Hence the use of the radial artery dicrotic notch as an estimate of end systole is unreliable. The rate of systolic upstroke, dP/dt, of the radial artery waveform is a function of many factors, making it difficult to interpret. The radial artery waveform usually provides accurate estimates for mean and diastolic aortic pressures; for all other measurements it is an inadequate substitute for the aortic pressure waveform. In the presence of low forearm peripheral resistance the mean radial artery pressure may significantly underestimate the mean aortic pressure, as explained by a voltage divider model.
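The resonant-overshoot mechanism described above can be reproduced with a minimal model: a synthetic aortic waveform driving a single underdamped second-order stage standing in for the axillo-brachial-radial segment. The waveform harmonics, natural frequency and damping ratio below are assumed round numbers, not fitted physiological values:

```python
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0.0, 2.4, 1.0 / fs)
heart_rate = 1.25                             # Hz (0.8 s period)

# synthetic aortic pressure: mean ~93 mmHg plus two harmonics (illustrative)
aortic = (93 + 20 * np.sin(2 * np.pi * heart_rate * t)
             + 8 * np.sin(4 * np.pi * heart_rate * t - 1.0))

# arterial segment as one underdamped resonant stage (assumed values)
f_n, zeta = 4.0, 0.2                          # natural frequency (Hz), damping
wn = 2 * np.pi * f_n
radial = np.zeros_like(aortic)
x = v = 0.0
for i, p in enumerate(aortic):
    a = wn**2 * (p - x) - 2 * zeta * wn * v   # x'' = wn^2 (p - x) - 2 zeta wn x'
    v += a / fs
    x += v / fs
    radial[i] = x

steady = slice(800, None)                     # drop the start-up transient
```

Running this, the simulated radial systolic pressure exceeds the aortic systolic by several mmHg while the mean pressures agree, which is the behavior the abstract attributes to resonance of the arterial wall: the second harmonic of the pulse sits closer to the segment's natural frequency and is amplified.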
Computational Analysis of 3D Ising Model Using Metropolis Algorithms
International Nuclear Information System (INIS)
Sonsin, A F; Cortes, M R; Nunes, D R; Gomes, J V; Costa, R S
2015-01-01
We simulate the Ising model with the Monte Carlo method, using the Metropolis algorithm to update the spin distribution. We found that, in the specific case of the three-dimensional Ising model, the Metropolis method is efficient. Studying the system near the phase-transition point, we observe that the magnetization goes to zero. In our simulations we analyzed the behavior of the magnetization and the magnetic susceptibility to verify the paramagnetic-to-ferromagnetic phase transition. The behavior of the magnetization and of the magnetic susceptibility as functions of temperature suggests a phase transition around kT/J ≈ 4.5, and finite-size effects of the lattice were evident, motivating work with larger lattices. (paper)
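A minimal sketch of the single-spin-flip Metropolis update for the 3D Ising model (J = 1, zero field; the lattice size and sweep count are chosen only to keep the run short, not taken from the paper):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L x L lattice with periodic
    boundaries: propose single-spin flips, accept with min(1, exp(-beta dE))."""
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        nb = (spins[(i+1) % L, j, k] + spins[(i-1) % L, j, k] +
              spins[i, (j+1) % L, k] + spins[i, (j-1) % L, k] +
              spins[i, j, (k+1) % L] + spins[i, j, (k-1) % L])
        dE = 2.0 * spins[i, j, k] * nb        # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

def magnetization(L, T, sweeps=60, rng=None):
    if rng is None:
        rng = np.random.default_rng(1)
    spins = np.ones((L, L, L), dtype=int)     # ordered start
    for _ in range(sweeps):
        metropolis_sweep(spins, 1.0 / T, rng)
    return abs(spins.mean())

m_cold = magnetization(8, 3.0)   # well below kT/J ~ 4.5: stays ordered
m_hot = magnetization(8, 6.0)    # well above: disorders, |m| near zero
```

Scanning T across the transition region and recording |m| and its fluctuations (for the susceptibility) reproduces the qualitative picture the abstract reports.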
Computer-aided modeling framework for efficient model development, analysis and identification
DEFF Research Database (Denmark)
Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio
2011-01-01
Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task. ... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...
The computer model development for radionuclide migration analysis in geosphere
International Nuclear Information System (INIS)
Mulyanto
1998-01-01
A 1-D numerical model for the safety assessment of spent fuel disposal has been developed. A numerical solution with planar geometry was developed to solve mass transport in heterogeneous geological media. In this paper, the Crank-Nicolson method is discussed for solving the radionuclide migration equation. A demonstration calculation of the concentration distribution of several radionuclides in the exclusion zone is presented. It is concluded that the exclusion zone is an important concept that should be adopted in the determination of a disposal site: the site should be selected as far as possible from fractures, i.e. with as long an exclusion zone as possible. (author)
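The Crank-Nicolson discretization mentioned above can be sketched for a 1-D advection-dispersion-decay transport equation with retardation. All physical parameters below are illustrative round numbers, not values from the report:

```python
import numpy as np

def crank_nicolson_step(C, D, v, lam, R, dx, dt):
    """One Crank-Nicolson step for R dC/dt = D d2C/dx2 - v dC/dx - lam R C,
    with a fixed concentration at the inlet and zero at the far boundary.
    A dense solve is used for clarity; a tridiagonal solver would be used
    in practice."""
    n = C.size
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i-1] = (D / dx**2 + v / (2 * dx)) / R
        A[i, i]   = (-2 * D / dx**2) / R - lam
        A[i, i+1] = (D / dx**2 - v / (2 * dx)) / R
    M1 = np.eye(n) - 0.5 * dt * A            # implicit half
    M2 = np.eye(n) + 0.5 * dt * A            # explicit half
    Cn = np.linalg.solve(M1, M2 @ C)
    Cn[0], Cn[-1] = C[0], 0.0                # Dirichlet boundaries
    return Cn

# illustrative parameters: 100 m path, dispersion 1 m2/yr, Darcy velocity
# 0.5 m/yr, long-lived nuclide, retardation factor 50
x = np.linspace(0.0, 100.0, 101)             # dx = 1 m
C = np.zeros_like(x); C[0] = 1.0             # constant source at the inlet
for _ in range(500):                         # 500 steps of 2 yr = 1000 yr
    C = crank_nicolson_step(C, D=1.0, v=0.5, lam=1e-6, R=50.0, dx=1.0, dt=2.0)
```

With these values the retarded front travels only about 10 m in 1000 yr, which is the kind of calculation behind choosing an exclusion zone length.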
International Nuclear Information System (INIS)
Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.
2009-01-01
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
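The surrogate-plus-bootstrap idea can be sketched on a cheap analytic stand-in for the expensive code. The test function, sample sizes and quadratic surrogate basis below are all assumptions for illustration, not the meta-models of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive_model(X):
    """Stand-in for a costly simulation; analytic, so the true first-order
    sensitivity index of x1 is known: S1 = (1/3)/(1/3 + 4/45) = 15/19."""
    return X[:, 0] + X[:, 1]**2

def basis(X):
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])

def fit_surrogate(X, y):
    """Quadratic response-surface surrogate fitted by least squares."""
    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
    return lambda A: basis(A) @ coef

def first_order_index(model, j, m=20000):
    """First-order sensitivity index S_j estimated on the (cheap) surrogate
    with a fix-and-resample Monte Carlo scheme."""
    A = rng.uniform(-1, 1, size=(m, 2))
    B = rng.uniform(-1, 1, size=(m, 2))
    AB = A.copy(); AB[:, j] = B[:, j]
    yA, yB, yAB = model(A), model(B), model(AB)
    V = np.var(np.concatenate([yA, yB]))
    return float(np.mean(yB * (yAB - yA)) / V)

n = 200                                       # small design: runs are "expensive"
X = rng.uniform(-1, 1, size=(n, 2))
y = expensive_model(X)
s1_hat = first_order_index(fit_surrogate(X, y), 0)

# bootstrap the design to get a confidence interval on the index estimate
boot = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    boot.append(first_order_index(fit_surrogate(X[idx], y[idx]), 0))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The bootstrap interval is what quantifies the estimation error that, as the abstract notes, is ignored when a single surrogate-based index is reported.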
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web
Computational Modeling | Bioenergy | NREL
Computational Modeling: NREL uses computational modeling to increase the ... Cell walls are the source of biofuels and biomaterials; our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers ...
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for
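The reduction of steady-state finding to a polynomial system over GF(2) can be shown on a toy three-node network. Here the tiny system is solved by enumeration; the tools the abstract describes solve the same polynomial system with computer algebra, which is what scales to large sparse networks:

```python
from itertools import product

# toy 3-gene network: x1' = x2 AND x3, x2' = x1 OR x3, x3' = x1
# over GF(2): AND -> a*b, OR -> a + b + a*b, NOT -> 1 + a  (all mod 2)
def F(x):
    x1, x2, x3 = x
    return ((x2 * x3) % 2,
            (x1 + x3 + x1 * x3) % 2,
            x1)

# steady states are exactly the solutions of f_i(x) + x_i = 0 over GF(2)
steady = [x for x in product((0, 1), repeat=3)
          if all((fi + xi) % 2 == 0 for fi, xi in zip(F(x), x))]
```

For this network the system has exactly two solutions, the all-off and all-on states; replacing the enumeration with a Gröbner-basis solve of the same equations is the step that avoids the exponential phase-space walk.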
Computational Intelligence, Cyber Security and Computational Models
Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel
2014-01-01
This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two
International Nuclear Information System (INIS)
Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie
2012-01-01
Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which improves slightly their use of model evaluations, and then derive for this generic type of computational methods an estimator of the error estimation of sensitivity indices with respect to the sampling size. It allows the detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
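The variance decomposition that Sobol's method performs can be sketched with a fix-and-resample estimator on a simple test function containing a pure interaction term. This is the plain Saltelli/Jansen form, not the improved estimator of the paper, and the function and sample size are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    """Simple non-linear test function f = x1 + x1*x2 on U(-1,1)^2.
    Analytically: S1 = 0.75, S2 = 0 (no main effect of x2),
    total-effect indices T1 = 1.0 and T2 = 0.25 (interaction only)."""
    return x[:, 0] + x[:, 0] * x[:, 1]

N = 20000
A = rng.uniform(-1, 1, size=(N, 2))
B = rng.uniform(-1, 1, size=(N, 2))
yA, yB = model(A), model(B)
V = np.var(np.concatenate([yA, yB]))

S, T = [], []
for j in range(2):
    AB = A.copy()
    AB[:, j] = B[:, j]                       # resample all inputs except x_j
    yAB = model(AB)
    S.append(float(np.mean(yB * (yAB - yA)) / V))       # first-order index
    T.append(float(np.mean((yA - yAB)**2) / (2 * V)))   # total effect (Jansen)
```

The gap between S2 = 0 and T2 = 0.25 is exactly the interaction information the abstract highlights, and the Monte Carlo cost of the A/B/AB evaluations is the expense its error-controlled estimator is designed to manage.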
Thorp, Scott A.
1992-01-01
This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.
NGScloud: RNA-seq analysis of non-model species using cloud computing.
Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai
2018-05-03
RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using the cloud computing services of Amazon that permit the access to ad hoc computing infrastructure scaled according to the complexity of the experiment, so its costs and times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources, and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. unai.lopezdeheredia@upm.es.
International Nuclear Information System (INIS)
Sullivan, S.P.; Cecco, V.S.; Carter, J.R.; Spanner, M.; McElvanney, M.; Krause, T.W.; Tkaczyk, R.
2000-01-01
Licensing requirements for eddy current inspections of nuclear steam generators and heat exchangers are becoming increasingly stringent. The traditional industry-standard method of comparing inspection signals with flaw signals from simple in-line calibration standards is proving to be inadequate. A more complete understanding of eddy current and magnetic field interactions with flaws and other anomalies is required for the industry to generate consistently reliable inspections. Computer modeling is a valuable tool for improving the reliability of eddy current signal analysis. Results from computer modeling are helping inspectors to properly discriminate between real flaw signals and false calls, and are improving reliability in flaw sizing. This presentation will discuss complementary eddy current computer modeling techniques such as the Finite Element Method (FEM), the Volume Integral Method (VIM), the Layer Approximation and other analytic methods. Each of these methods has advantages and limitations. An extension of the Layer Approximation to model eddy current probe responses to ferromagnetic materials will also be presented. Finally, examples will be discussed demonstrating how some significant eddy current signal analysis problems have been resolved using appropriate electromagnetic computer modeling tools.
Computer model verification for seismic analysis of vertical pumps and motors
International Nuclear Information System (INIS)
McDonald, C.K.
1993-01-01
The general principles of modeling vertical pumps and motors are discussed, and two examples of verifying the models are presented in detail. The first example is a vertical pump and motor assembly. The model and computer analysis are presented, and the first four modes (frequencies) calculated are compared to the values of the same modes obtained from a shaker test. The model used for this example is a lumped-mass model in which the masses are connected by massless beams. The shaker test was performed by National Technical Services, Los Angeles, CA. The second example is a larger vertical motor. The model used for this example is a three-dimensional finite element shell model. The first frequency obtained from this model is compared to the first frequency obtained from shop tests for several different motors. The shop tests were performed by Reliance Electric, Stratford, Ontario and Siemens-Allis, Inc., Norwood, Ohio.
Model-based Computer Aided Framework for Design of Process Monitoring and Analysis Systems
DEFF Research Database (Denmark)
Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul
2009-01-01
In the manufacturing industry, for example, the pharmaceutical industry, a thorough understanding of the process is necessary in addition to a properly designed monitoring and analysis system (PAT system) to consistently obtain the desired end-product properties. A model-based computer-aided framework, including the methods and tools through which the design of monitoring and analysis systems for product quality control can be generated, analyzed and/or validated, has been developed. Two important supporting tools developed as part of the framework are a knowledge base and a model library. The knowledge base provides the necessary information/data during the design of the PAT system, while the model library generates additional or missing data needed for design and analysis. Optimization of the PAT system design is achieved in terms of product data analysis time and/or cost of monitoring equipment.
A resource facility for kinetic analysis: modeling using the SAAM computer programs.
Foster, D M; Boston, R C; Jacquez, J A; Zech, L
1989-01-01
Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are "archivable", transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead, costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
International Nuclear Information System (INIS)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined
Matsypura, Dmytro
In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for the modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale of supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior, in the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management and pricing of electric power systems become increasingly pressing topics, with relevance not only for economic prosperity but also for national security. This dissertation addresses such topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following
Wang, Zhihui; Deisboeck, Thomas S.; Cristini, Vittorio
2014-01-01
There are two challenges that researchers face when performing global sensitivity analysis (GSA) on multiscale in silico cancer models. The first is increased computational intensity, since a multiscale cancer model generally takes longer to run than a scale-specific model does. The second is the lack of a single GSA method that fits all types of models, which implies that multiple methods, and the order in which they are applied, need to be taken into account. In this article, we therefore propose a sampling-based GSA workflow consisting of three phases – pre-analysis, analysis, and post-analysis – by integrating Monte Carlo and resampling methods with the repeated use of analysis of variance (ANOVA); we then exemplify this workflow using a two-dimensional multiscale lung cancer model. By accounting for all parameter rankings produced by multiple GSA methods, a summarized ranking is created at the end of the workflow based on the weighted mean of the rankings for each input parameter. For the cancer model investigated here, this analysis reveals that ERK, a downstream molecule of the EGFR signaling pathway, has the most important impact on regulating both the tumor volume and the expansion rate in the algorithm used. PMID:25257020
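The final rank-aggregation step described above can be sketched as a weighted mean of per-method rankings. The function name, method weights, and example rankings below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def consensus_ranking(rankings, weights=None):
    """Combine parameter rankings from several GSA methods into one
    summarized ranking via the weighted mean rank of each parameter.
    `rankings` is a (methods x parameters) array of ranks, with
    1 = most influential; `weights` optionally weights each method."""
    R = np.asarray(rankings, dtype=float)
    w = np.ones(R.shape[0]) if weights is None else np.asarray(weights, float)
    mean_rank = (w[:, None] * R).sum(axis=0) / w.sum()
    order = np.argsort(mean_rank)   # parameter indices, most influential first
    return mean_rank, order

# three hypothetical GSA methods ranking four parameters,
# with the third method given half weight
rankings = [[1, 2, 3, 4],
            [2, 1, 3, 4],
            [1, 3, 2, 4]]
mean_rank, order = consensus_ranking(rankings, weights=[1.0, 1.0, 0.5])
```

Averaging ranks rather than raw sensitivity values sidesteps the problem that different GSA methods report indices on different scales.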
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The approach is applicable to low-level radioactive waste disposal system performance assessment.
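The propagation step of a derivative-based method can be illustrated with the first-order delta method: once a system like GRESS or ADGEN has produced the derivatives, output variance follows from a single weighted sum. This is only a sketch of the general idea under a linearity and independence assumption, not the DUA algorithm itself; the function and example values are illustrative.

```python
import numpy as np

def propagate_variance(grad, sigma):
    """First-order (delta-method) uncertainty propagation: given the
    derivatives dy/dx_i evaluated at the nominal point and independent
    input standard deviations sigma_i, the output variance is
    sum_i (dy/dx_i * sigma_i)^2."""
    grad = np.asarray(grad, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return float(np.sum((grad * sigma) ** 2))

# toy linear response y = 3*x1 - 2*x2 with sigma = (0.1, 0.2):
# var(y) = (3*0.1)^2 + (2*0.2)^2 = 0.09 + 0.16 = 0.25
var_y = propagate_variance([3.0, -2.0], [0.1, 0.2])
```

For a linear model this is exact; for nonlinear codes it is a local approximation around the nominal point, which is why the full DUA method propagates entire distributions rather than just variances.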
International Nuclear Information System (INIS)
Mazzelli, Federico; Little, Adrienne B.; Garimella, Srinivas; Bartosiewicz, Yann
2015-01-01
Highlights: • Computational and experimental assessment of computational techniques for ejector flows. • Comparisons to 2D/3D (k–ε, k–ε realizable, k–ω SST, and stress–ω RSM) turbulence models. • k–ω SST model performs best while ε-based models more accurate at low motive pressures. • Good on-design agreement across 2D and 3D models; off-design needs 3D simulations. - Abstract: Numerical and experimental analyses are performed on a supersonic air ejector to evaluate the effectiveness of commonly-used computational techniques when predicting ejector flow characteristics. Three series of experimental curves at different operating conditions are compared with 2D and 3D simulations using RANS, steady, wall-resolved models. Four different turbulence models are tested: k–ε, k–ε realizable, k–ω SST, and the stress–ω Reynolds Stress Model. An extensive analysis is performed to interpret the differences between numerical and experimental results. The results show that while differences between turbulence models are typically small with respect to the prediction of global parameters such as ejector inlet mass flow rates and Mass Entrainment Ratio (MER), the k–ω SST model generally performs best whereas ε-based models are more accurate at low motive pressures. Good agreement is found across all 2D and 3D models at on-design conditions. However, prediction at off-design conditions is only acceptable with 3D models, making 3D simulations mandatory to correctly predict the critical pressure and achieve reasonable results at off-design conditions. This may partly depend on the specific geometry under consideration, which in the present study has a rectangular cross section with low aspect ratio.
Plasticity: modeling & computation
National Research Council Canada - National Science Library
Borja, Ronaldo Israel
2013-01-01
"Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...
A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.
2013-12-01
Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and for using models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
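A minimal sketch of DELSA-style local indices, assuming the standard first-order definition (squared local derivatives weighted by parameter variances, normalized at each sample point); the toy model, gradient function, and values below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def delsa_indices(grad_fn, samples, sigma2):
    """Distributed Evaluation of Local Sensitivity Analysis (sketch).
    At each sample point theta, compute first-order local indices
        S_j = (dy/dtheta_j)^2 * var_j / sum_k (dy/dtheta_k)^2 * var_k,
    yielding a distribution of sensitivities across parameter space
    rather than a single global number per parameter."""
    sigma2 = np.asarray(sigma2, dtype=float)
    out = []
    for theta in samples:
        g = np.asarray(grad_fn(theta), dtype=float)
        contrib = g ** 2 * sigma2        # per-parameter variance contribution
        out.append(contrib / contrib.sum())
    return np.array(out)                 # shape: (n_samples, n_params)

# toy model y = theta1 * theta2, so the gradient is (theta2, theta1);
# sensitivity flips between the two sample points
grad = lambda th: (th[1], th[0])
S = delsa_indices(grad, [(1.0, 2.0), (2.0, 1.0)], sigma2=[1.0, 1.0])
```

The per-point indices sum to one, so each row can be read as the local share of output variance attributed to each parameter.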
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
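The borehole flow model mentioned above is a widely used test function. The sketch below evaluates it at an assumed mid-range nominal point and extracts one derivative from two extra model runs, in the spirit of (but not identical to) the paper's two-run DUA comparison; the nominal values and the choice of uncertain input are illustrative assumptions.

```python
import numpy as np

def borehole(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Classic borehole test function: flow rate (m^3/yr) of water
    through a borehole connecting an upper and a lower aquifer."""
    log_r = np.log(r / rw)
    num = 2.0 * np.pi * Tu * (Hu - Hl)
    den = log_r * (1.0 + 2.0 * L * Tu / (log_r * rw**2 * Kw) + Tu / Tl)
    return num / den

# assumed mid-range nominal point (the paper's exact values may differ)
x0 = dict(rw=0.10, r=25050.0, Tu=89335.0, Hu=1050.0,
          Tl=89.55, Hl=760.0, L=1400.0, Kw=10950.0)
f0 = borehole(**x0)

# derivative of flow w.r.t. the upper head Hu from a central
# difference, i.e. only two additional model executions
h = 1.0
xp, xm = dict(x0, Hu=x0["Hu"] + h), dict(x0, Hu=x0["Hu"] - h)
dfdHu = (borehole(**xp) - borehole(**xm)) / (2.0 * h)

# delta-method standard deviation, assuming Hu ~ N(1050, 20^2)
# with all other inputs held fixed
sigma_f = abs(dfdHu) * 20.0
```

Because the borehole response is linear in Hu, the two-run central difference here recovers the derivative exactly; for genuinely nonlinear inputs the derivative-based result remains a local approximation, which is the trade-off the paper's comparison quantifies.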
Getis, Arthur
1997-01-01
In recent years, spatial analysis has become an increasingly active field, as evidenced by the establishment of educational and research programs at many universities. Its popularity is due mainly to new technologies and the development of spatial data infrastructures. This book illustrates some recent developments in spatial analysis, behavioural modelling, and computational intelligence. World-renowned spatial analysts explain and demonstrate their new and insightful models and methods. The applications are in areas of societal interest such as the spread of infectious diseases, migration behaviour, and retail and agricultural location strategies. In addition, there is emphasis on the use of new technologies for the analysis of spatial data through the application of neural network concepts.
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Frank, M; Pacheco, Andreu
1998-01-01
This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10⁶ MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10⁶ MIPS) and physics analysis (0.5 × 10⁶ MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
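The quoted group-analysis figures can be sanity-checked with a one-line conversion, assuming a 30-day month and decimal (SI) units:

```python
# 4.2 MB/s sustained over a 30-day month, in decimal units
rate_mb_s = 4.2
seconds_per_month = 30 * 24 * 3600            # 2,592,000 s
tb_per_month = rate_mb_s * seconds_per_month / 1e6   # MB -> TB
# ~10.9 TB, consistent with the quoted "11 TB per month"
```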
Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.
2014-04-01
Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic impact of disasters and recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and, furthermore, avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in each period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of the given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 is found to be closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of the government-led reconstruction activity, a level that would take another four years to reach under S2.
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.
Shielding Benchmark Computational Analysis
International Nuclear Information System (INIS)
Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.
2000-01-01
Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport modeling codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).
Computational fluid dynamics application: slosh analysis of a fuel tank model
International Nuclear Information System (INIS)
Iu, H.S.; Cleghorn, W.L.; Mills, J.K.
2004-01-01
This paper presents an analysis of fluid slosh behaviour inside a fuel tank model. The fuel tank model was a simplified version of a stock fuel tank that has a sloshing noise problem. A commercial CFD software package, FLOW-3D, was used to simulate the slosh behaviour. Slosh experiments were performed to verify the computer simulation results. High-speed video equipment enhanced with a data acquisition system was used to record the slosh experiments and to obtain the instantaneous sound level of each video frame. Five baffle configurations, including the no-baffle configuration, were considered in the computer simulations and the experiments. The simulation results showed that the best baffle configuration can reduce the mean kinetic energy by 80% relative to the no-baffle configuration in a certain slosh situation. The experimental results showed that a 15 dB(A) noise reduction can be achieved by the best baffle configuration. The correlation analysis between the mean kinetic energy and the noise level showed that high mean kinetic energy of the fluid does not always correspond to loud sloshing noise. High correlation between them only occurs for slosh situations where the fluid hits the top of the tank and creates noise. (author)
Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen
2014-04-04
A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between the experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-source software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously, shortening the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Energy Technology Data Exchange (ETDEWEB)
Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2018-02-09
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
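The global sensitivity analysis step described in this abstract is typically carried out with variance-based (Sobol) indices. The sketch below is a generic pick-freeze Monte Carlo estimator on a toy model, not the samplers or flow solver used in the study; the function name and toy model are illustrative assumptions.

```python
import numpy as np

def sobol_first_order(model, dim, n=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices (Jansen estimator).

    `model` maps an (n, dim) array of uniform(0,1) inputs to an (n,) output.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    S = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # resample only the i-th input
        yABi = model(ABi)
        # Jansen estimator for the first-order effect of input i
        S.append(1.0 - np.mean((yB - yABi) ** 2) / (2.0 * var))
    return S

# Toy model Y = 3*X1 + X2: analytically S1 = 0.9 and S2 = 0.1.
S = sobol_first_order(lambda X: 3.0 * X[:, 0] + X[:, 1], dim=2)
```

Inputs with small indices contribute little output variance and can be fixed, which is how such an analysis reduces the stochastic dimension.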
Energy Technology Data Exchange (ETDEWEB)
Braz Filho, Francisco A.; Caldeira, Alexandre D.; Borges, Eduardo M., E-mail: fbraz@ieav.cta.b, E-mail: alexdc@ieav.cta.b, E-mail: eduardo@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil). Div. de Energia Nuclear
2011-07-01
In a heated vertical channel, the subcooled flow boiling regime occurs when the bulk fluid temperature is lower than the saturation temperature, but the fluid temperature reaches the saturation point near the channel wall. This phenomenon produces a significant increase in heat flux, limited by the critical heat flux. This study is particularly important to the thermal-hydraulics analysis of pressurized water reactors. The purpose of this work is the validation of a multidimensional model for analyzing subcooled flow boiling by comparing the results with experimental data found in the literature. The computational fluid dynamics code FLUENT was used with the Eulerian multiphase model option. The calculated values of wall temperature at the liquid-solid interface showed excellent agreement with the experimental data. Void fraction calculations presented satisfactory results relative to the experimental data at pressures of 15, 30 and 45 bars. (author)
CoreFlow: A computational platform for integration, analysis and modeling of complex biological data
DEFF Research Database (Denmark)
Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau
2014-01-01
A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts … between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion …
Energy Technology Data Exchange (ETDEWEB)
Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)
2013-10-11
Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.
Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform
Hu, Yong
2017-12-01
In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this model, the server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with a load-balancing function that is shown to be feasible and to offer practical advantages.
ISAMBARD: an open-source computational environment for biomolecular analysis, modelling and design.
Wood, Christopher W; Heal, Jack W; Thomson, Andrew R; Bartlett, Gail J; Ibarra, Amaurys Á; Brady, R Leo; Sessions, Richard B; Woolfson, Derek N
2017-10-01
The rational design of biomolecules is becoming a reality. However, further computational tools are needed to facilitate and accelerate this, and to make it accessible to more users. Here we introduce ISAMBARD, a tool for structural analysis, model building and rational design of biomolecules. ISAMBARD is open-source, modular, computationally scalable and intuitive to use. These features allow non-experts to explore biomolecular design in silico. ISAMBARD addresses a standing issue in protein design, namely, how to introduce backbone variability in a controlled manner. This is achieved through the generalization of tools for parametric modelling, describing the overall shape of proteins geometrically, and without input from experimentally determined structures. This will allow backbone conformations for entire folds and assemblies not observed in nature to be generated de novo, that is, to access the 'dark matter of protein-fold space'. We anticipate that ISAMBARD will find broad applications in biomolecular design, biotechnology and synthetic biology. A current stable build can be downloaded from the python package index (https://pypi.python.org/pypi/isambard/) with development builds available on GitHub (https://github.com/woolfson-group/) along with documentation, tutorial material and all the scripts used to generate the data described in this paper. d.n.woolfson@bristol.ac.uk or chris.wood@bristol.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Topics in Modeling of Cochlear Dynamics: Computation, Response and Stability Analysis
Filo, Maurice G.
This thesis touches upon several topics in cochlear modeling. Throughout the literature, mathematical models of the cochlea vary according to the degree of biological realism to be incorporated. This thesis casts the cochlear model as a continuous space-time dynamical system using operator language. This framework encompasses a wider class of cochlear models and makes the dynamics more transparent and easier to analyze before applying any numerical method to discretize space. In fact, several numerical methods are investigated to study the computational efficiency of their finite-dimensional realizations in space. Furthermore, we study the effects of active gain perturbations on the stability of the linearized dynamics. The stability analysis is used to explain possible mechanisms underlying spontaneous otoacoustic emissions and tinnitus. Dynamic Mode Decomposition (DMD) is introduced as a useful tool to analyze the response of nonlinear cochlear models. Cochlear response features are illustrated using DMD, which has the advantage of explicitly revealing the spatial modes of vibration occurring in the Basilar Membrane (BM). Finally, we address the dynamic estimation problem of BM vibrations using Extended Kalman Filters (EKF). Given the limitations of noninvasive sensing schemes, such algorithms are indispensable for estimating the dynamic behavior of a living cochlea.
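The Dynamic Mode Decomposition used in the thesis can be sketched compactly. The following is a textbook-style exact-DMD implementation demonstrated on a toy linear system, not the cochlear models themselves; the function name and the 2x2 test system are illustrative assumptions.

```python
import numpy as np

def dmd_modes(X, Xprime, r=2):
    """Exact DMD of snapshot pairs: columns of X are states x_k and
    columns of Xprime are x_{k+1}. Returns the eigenvalues and modes of
    the best-fit linear operator A with Xprime ~ A X, via a rank-r SVD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Project A onto the leading r POD modes: Atilde = U^* Xprime V S^-1
    Atilde = U.conj().T @ Xprime @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xprime @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k: DMD should recover A's eigenvalues.
A = np.array([[0.9, 0.2], [0.0, 0.5]])
x = np.array([1.0, 1.0])
snapshots = [x]
for _ in range(20):
    x = A @ x
    snapshots.append(x)
Z = np.array(snapshots).T
eigvals, modes = dmd_modes(Z[:, :-1], Z[:, 1:], r=2)
```

The recovered eigenvalues give the growth/decay and oscillation of each spatial mode, which is what makes DMD attractive for visualizing BM vibration patterns.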
Computer-Aided Modeling Framework
DEFF Research Database (Denmark)
Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul
Models are playing important roles in design and analysis of chemicals-based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working … development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology which combines … In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed-bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse …
Computer Profiling Based Model for Investigation
Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh
2011-01-01
Computer profiling is used for computer forensic analysis. This paper proposes and elaborates on a novel model for use in computer profiling: the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a computer.
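As a rough illustration of such an object-based profiling model, a computer can be represented as objects carrying attributes and relationships, over which simple inferences run. The schema and the `usage_hints` heuristic below are invented for illustration and are not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    """One artefact on the examined machine (hypothetical schema)."""
    name: str
    kind: str                                   # e.g. "application", "document", "account"
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)  # links to other ProfileObjects

def usage_hints(objects):
    """Toy inference: flag objects whose relationship count suggests active use."""
    return [o.name for o in objects if len(o.related) >= 2]

browser = ProfileObject("firefox", "application", {"last_run": "2011-03-02"})
doc = ProfileObject("report.odt", "document", {"author": "jdoe"})
user = ProfileObject("jdoe", "account")
browser.related += [user]
doc.related += [user, browser]
```

A real reasoning engine would of course apply far richer rules over such an object graph; the point is only the objects-plus-relationships shape of the information model.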
Directory of Open Access Journals (Sweden)
Marianna Taffi
2014-09-01
The pressure to search for effective bioremediation methodologies for contaminated ecosystems has led to the large-scale identification of microbial species and metabolic degradation pathways. However, minor attention has been paid to the study of bioremediation in marine food webs and to the definition of integrated strategies for reducing bioaccumulation in species. We propose a novel computational framework for analysing the multiscale effects of bioremediation at the ecosystem level, based on coupling food web bioaccumulation models and metabolic models of degrading bacteria. The combination of techniques from synthetic biology and ecological network analysis allows the specification of arbitrary scenarios of contaminant removal and the evaluation of strategies based on natural or synthetic microbial strains. In this study, we derive a bioaccumulation model of polychlorinated biphenyls (PCBs) in the Adriatic food web, and we extend a metabolic reconstruction of Pseudomonas putida KT2440 (iJN746) with the aerobic pathway of PCB degradation. We assess the effectiveness of different bioremediation scenarios in reducing PCB concentration in species and we study indices of species centrality to measure their importance in contaminant diffusion via feeding links. The analysis of the Adriatic Sea case study suggests that our framework could represent a practical tool in the design of effective remediation strategies, providing at the same time insights into the ecological role of microbial communities within food webs.
Use of logic flowgraph models in a computer aided process analysis and management system
International Nuclear Information System (INIS)
Guarro, S.B.; Okrent, D.
1985-07-01
The development of a multi-function computer-aided process analysis and management (CAPAM) system, to be implemented in nuclear power plant control rooms, is proposed and discussed. The design goals identified for such a system are early disturbance detection and diagnosis, accompanied by identification of the best possible recovery actions or alternative success paths. The CAPAM structure is articulated in three functional levels with dedicated CRT displays. An increasing amount of diagnostic or recovery information is made available to the operators at the lower display levels. Probabilistic safety margins to the loss of important safety functions may also be calculated. The proposed implementation of the CAPAM concept is based on the use of logic flowgraph networks for the more detailed system modeling. Examples of such an implementation are given. 7 refs., 3 figs., 2 tabs
Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study
Directory of Open Access Journals (Sweden)
Charalambos Koukouvaos
2014-01-01
Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are quite complex tasks that are difficult to carry out. To cope with this kind of problem, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application in photovoltaic systems.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
GENERAL | ARTICLE. Computer-based modelling and simulation … can be carried out on personal computers (PCs) with low-cost software packages and tools. They can serve as a useful learning experience through student projects. Models are … A numerical example is considered: calculating the velocity of a trainer aircraft …
Performance modeling and analysis of parallel Gaussian elimination on multi-core computers
Directory of Open Access Journals (Sweden)
Fadi N. Sibai
2014-01-01
Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for a low number of cores. For a large matrix size (n ⩾ 16K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16K cores. The efficiencies of all four methods are low with 1K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
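For readers unfamiliar with the baseline algorithm being parallelized, a sequential Gaussian elimination with partial pivoting looks like the sketch below. This is only the generic textbook loop nest corresponding to the "Original" method; the MiM variants analyzed in the paper reorganize this same computation and are not reproduced here.

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = b.size
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # pivot row for stability
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):                  # eliminate column k
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x = gaussian_eliminate([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

The inner elimination loop over rows is the part that parallelizes across cores and vectorizes with SIMD, which is where the performance models above apply.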
Development of computer code models for analysis of subassembly voiding in the LMFBR
International Nuclear Information System (INIS)
Hinkle, W.
1979-12-01
The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered
A computational model for thermal fluid design analysis of nuclear thermal rockets
International Nuclear Information System (INIS)
Given, J.A.; Anghaie, S.
1997-01-01
A computational model for simulation and design analysis of nuclear thermal propulsion systems has been developed. The model simulates a full-topping expander cycle engine system and the thermofluid dynamics of the core coolant flow, accounting for the real gas properties of the hydrogen propellant/coolant throughout the system. Core thermofluid studies reveal that near-wall heat transfer models currently available may not be applicable to conditions encountered within some nuclear rocket cores. Additionally, the possibility of a core thermal fluid instability at low mass fluxes and the effects of the core power distribution are investigated. Results indicate that for tubular core coolant channels, thermal fluid instability is not an issue within the possible range of operating conditions in these systems. Findings also show the advantages of having a nonflat centrally peaking axial core power profile from a fluid dynamic standpoint. The effects of rocket operating conditions on system performance are also investigated. Results show that high temperature and low pressure operation is limited by core structural considerations, while low temperature and high pressure operation is limited by system performance constraints. The utility of these programs for finding these operational limits, optimum operating conditions, and thermal fluid effects is demonstrated
Computational models of neuromodulation.
Fellous, J M; Linster, C
1998-05-15
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
Buraphadeja, Vasa; Dawson, Kara
2008-01-01
This article reviews content analysis studies aimed at assessing critical thinking in computer-mediated communication. It also discusses theories and content analysis models that encourage critical thinking skills in asynchronous learning environments and reviews theories and factors that may foster critical thinking skills and new knowledge…
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
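The Petri-net semantics underlying the workflow patterns is simple to state: a transition is enabled when its input places hold enough tokens, and firing it moves tokens from the pre-set to the post-set. The sketch below encodes that firing rule and the basic "sequence" pattern; it is a generic illustration, not the paper's formal machinery.

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire one transition: consume pre-set tokens, produce post-set tokens."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# The basic "sequence" workflow pattern: t1 then t2, chained through place p2.
m0 = {"p1": 1}
t1 = ({"p1": 1}, {"p2": 1})   # (pre-set, post-set)
t2 = ({"p2": 1}, {"p3": 1})
m1 = fire(m0, *t1)
m2 = fire(m1, *t2)
```

Showing that a CIG language can express a pattern amounts to exhibiting a guideline whose behavior is bisimilar to the reachable markings of such a net.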
Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models
Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.
2016-01-01
A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of
Directory of Open Access Journals (Sweden)
Luana Souto Barros
2014-12-01
OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion.
Exploring Parameter Tuning for Analysis and Optimization of a Computational Model
Mollee, J.S.; Fernandes de Mello Araujo, E.; Klein, M.C.A.
2017-01-01
Computational models of human processes are used for many different purposes and in many different types of applications. A common challenge in using such models is to find suitable parameter values. In many cases, the ideal parameter values are those that yield the most realistic simulation
Use of personal computers in performing a linear modal analysis of a large finite-element model
International Nuclear Information System (INIS)
Wagenblast, G.R.
1991-01-01
This paper presents the use of personal computers in performing a dynamic frequency analysis of a large (2,801 degrees of freedom) finite-element model. Linear time-history dynamic evaluations of large models of safety-related structures were previously restricted to mainframe computers using direct integration analysis methods. This restriction was a result of the limited memory and speed of personal computers. With the advances in memory capacity and speed of personal computers, large finite-element problems now can be solved in the office in a timely and cost-effective manner. Presented in three sections, this paper describes the procedure used to perform the dynamic frequency analysis of the large (2,801 degrees of freedom) finite-element model on a personal computer. Section 2.0 describes the structure and the finite-element model that was developed to represent the structure for use in the dynamic evaluation. Section 3.0 addresses the hardware and software used to perform the evaluation and the optimization of the hardware and software operating configuration to minimize the time required to perform the analysis. Section 4.0 explains the analysis techniques used to reduce the problem to a size compatible with the hardware and software memory capacity and configuration.
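At its core, the modal frequency analysis described above solves the generalized eigenproblem K φ = ω² M φ for the stiffness and mass matrices of the model. The toy 2-DOF spring-mass chain below (all values invented) stands in for the 2,801-DOF structural model to show the computation.

```python
import numpy as np

# Illustrative 2-DOF spring-mass chain: stiffness k, lumped masses m.
k, m = 1000.0, 2.0
K = np.array([[2.0 * k, -k], [-k, k]])   # stiffness matrix
M = np.diag([m, m])                      # lumped (diagonal) mass matrix

# With M = m*I the generalized eigenproblem K*phi = omega^2*M*phi reduces
# to a standard symmetric one; omega^2 are the squared natural frequencies.
omega2 = np.linalg.eigvalsh(K) / m
freqs_hz = np.sqrt(omega2) / (2.0 * np.pi)
```

For a full consistent mass matrix one would solve the generalized problem directly (e.g. with a generalized symmetric eigensolver); model reduction techniques like those in Section 4.0 shrink K and M before this step.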
Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era
Heikkinen, Aatos
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...
Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy.
Morimoto, Satoshi; Remijn, Gerard B; Nakajima, Yoshitaka
2016-01-01
Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
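A minimal stochastic model of the kind compared in such studies is a first-order chord-transition table, where the expectancy of the next chord is its conditional probability given the previous one. The chords and probabilities below are invented for illustration and are not the fitted models of the paper.

```python
import numpy as np

# Hypothetical first-order chord-transition model.
chords = ["I", "IV", "V"]
T = np.array([            # rows: previous chord, cols: next chord
    [0.2, 0.4, 0.4],      # after I
    [0.3, 0.2, 0.5],      # after IV
    [0.7, 0.2, 0.1],      # after V
])

def expectancy(prev, nxt):
    """Expectancy of chord `nxt` given the immediately preceding chord."""
    return T[chords.index(prev), chords.index(nxt)]
```

Richer variants condition on longer contexts or on an internally updated key estimate, which is the kind of model comparison the study performs against listeners' relatedness judgments.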
Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.
2018-03-01
The paper describes a computer model of an overhead crane system. The modeled overhead crane consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. Using the differential equations of motion of these mechanisms, derived through the Lagrange equation of the second kind, an overhead crane computer model can be built. The computer model was obtained using Matlab software. Transients of coordinate, linear speed and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload sway were obtained with respect to the vertical axis. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is an effective means of studying positioning control and anti-sway control systems.
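For small angles, the Lagrange-derived trolley-pendulum dynamics reduce to a linear sway equation driven by trolley acceleration, θ'' = -(g/l)·θ - a(t)/l. The sketch below integrates that reduced equation with semi-implicit Euler; the parameters and function name are illustrative assumptions, not the paper's Matlab model.

```python
def simulate_sway(accel, l=10.0, g=9.81, dt=0.001, steps=5000):
    """Integrate the small-angle payload-sway equation
        theta'' = -(g/l)*theta - accel(t)/l
    (linearized trolley-pendulum model) with semi-implicit Euler."""
    theta, omega, t = 0.0, 0.0, 0.0
    for _ in range(steps):
        alpha = -(g / l) * theta - accel(t) / l   # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        t += dt
    return theta

# Constant trolley acceleration of 0.5 m/s^2: the undamped sway oscillates
# around the quasi-static angle -a/g and never exceeds twice it.
theta_end = simulate_sway(lambda t: 0.5)
```

Anti-sway controllers shape `accel(t)` so that these oscillations cancel when the trolley stops, which is exactly what the full nonlinear model is used to study.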
Czaja, Klaudia; Matula, Rafal
2014-05-01
The paper presents an analysis of the possibilities of applying geophysical methods to the investigation of groundwater conditions. In this paper groundwater is defined as liquid water flowing through shallow aquifers. Groundwater conditions are described through the distribution of permeable layers (like sand, gravel, fractured rock) and impermeable or low-permeability layers (like clay, till, solid rock) in the subsurface. GPR (Ground Penetrating Radar), ERT (Electrical Resistivity Tomography), VES (Vertical Electric Soundings) and seismic reflection, refraction and MASW (Multichannel Analysis of Surface Waves) belong to non-invasive, surface geophysical methods. Due to differences in physical parameters like dielectric constant, resistivity, density and elastic properties between saturated and unsaturated zones, it is possible to use geophysical techniques for groundwater investigations. A few programmes for GPR, ERT, VES and seismic modelling were applied in order to verify and compare results. Models differ in values of physical parameters such as dielectric constant, electrical conductivity, P- and S-wave velocity and density, layer thickness and the depth of occurrence of the groundwater level. Obtained results of computer modelling for GPR and seismic methods and interpretation of test field measurements are presented. In all of these methods vertical resolution is the most important issue in groundwater investigations. This requires proper measurement methodology, e.g. antennas with sufficiently high frequencies, the Wenner array in electrical surveys, and proper geometry for seismic studies. Seismic velocities of unconsolidated rocks like sand and gravel are strongly influenced by porosity and water saturation. No influence of the degree of water saturation on seismic velocities is observed below a value of about 90% water saturation. A further saturation increase leads to a strong increase of P-wave velocity and a slight decrease of S-wave velocity. But in case of few models only the
Application of a coupled kinetics-thermalhydraulic computer model to noise analysis
International Nuclear Information System (INIS)
Miguel Cecenas Falcon; Rina M Campos-Gonzalez; Edmundo del Valle Gallegos
2005-01-01
Noise analysis is a common tool to evaluate the dynamic properties of a Boiling Water Reactor using the power measurements provided by the nuclear instrumentation, such as LPRMs. Stability monitors use noise analysis to evaluate the system decay ratio, and they require a large amount of data in order to test and validate the algorithms and produce reliable monitoring. Because a nuclear reactor normally operates at or near nominal conditions, the amount of interesting stationary data at relatively low power is limited. There are very important stability benchmarks that recorded power for a number of cases involving operation close to the stability boundary, and even during a fully developed limit cycle, but in general there is a limited amount of data at points of interest in the power and flow map to be used for stability studies. Under this limitation, a model that can generate all the required information for any point in the power and flow map is useful. Of particular importance is the capacity to generate time series equivalent to noisy LPRM signals close to the natural circulation line, and to test the early detection of incipient out-of-phase oscillations in the core. In order to generate these signals, a model of 36 parallel boiling channels is prepared to reproduce the benchmark conditions of the Ringhals core at test point 9 during cycle 14. Each channel considers a single-phase region, subcooled boiling, and bulk boiling. The power, flow and average void fraction at each of the 36 channels were reproduced to define a stationary model, which is perturbed with additive white noise in order to generate void fraction fluctuations. The cross sections are a function of the void fractions, hence the fluctuations are transmitted to the neutronics and finally to the power. The power fluctuations are analogous to those produced by bubble generation and collapse during the boiling process. The neutronics is modeled with a two-dimensional nodal
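The decay ratio mentioned above is the ratio of successive oscillation maxima of the system's impulse or noise response. As a minimal illustration (a synthetic damped oscillation rather than reactor data; the damping and frequency are invented), it can be estimated directly from two successive peaks:

```python
import numpy as np

# Sketch: estimating a decay ratio (DR) from an oscillatory signal, the
# quantity BWR stability monitors extract from LPRM noise. Synthetic signal;
# sigma (damping) and the 0.5 Hz resonance are illustrative values only.
sigma, omega = 0.1, 2 * np.pi * 0.5
t = np.linspace(0.0, 20.0, 20001)
y = np.exp(-sigma * t) * np.cos(omega * t)

# DR = ratio of two successive oscillation maxima; for this signal the
# exact value is exp(-sigma * T), with T the oscillation period.
T = 2 * np.pi / omega
peaks = [y[(t >= k * T - 0.5) & (t < k * T + 0.5)].max() for k in (1, 2)]
dr = peaks[1] / peaks[0]
print(dr)  # close to exp(-0.2) ~ 0.819; DR < 1 means a stable (decaying) mode
```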
Compound analysis of gallstones using dual energy computed tomography-Results in a phantom model
Energy Technology Data Exchange (ETDEWEB)
Bauer, Ralf W., E-mail: ralfwbauer@aol.co [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany); Schulz, Julian R., E-mail: julian.schulz@t-online.d [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany); Zedler, Barbara, E-mail: zedler@em.uni-frankfurt.d [Department of Forensic Medicine, Clinic of the Goethe University Frankfurt, Kennedyallee 104, 60596 Frankfurt (Germany); Graf, Thomas G., E-mail: thomas.gt.graf@siemens.co [Siemens AG Healthcare Sector, Computed Tomography, Physics and Applications, Siemensstrasse 1, 91313 Forchheim (Germany); Vogl, Thomas J., E-mail: t.vogl@em.uni-frankfurt.d [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany)
2010-07-15
Purpose: The potential of dual energy computed tomography (DECT) for the analysis of gallstone compounds was investigated. The main goal was to find parameters, that can reliably define high percentage (>70%) cholesterol stones without calcium components. Materials and methods: 35 gallstones were analyzed with DECT using a phantom model. Stone samples were put into specimen containers filled with formalin. Containers were put into a water-filled cylindrical acrylic glass phantom. DECT scans were performed using a tube voltage/current of 140 kV/83 mAs (tube A) and 80 kV/340 mAs (tube B). ROI-measurements to determine CT attenuation of each sector of the stones that had different appearance on the CT images were performed. Finally, semi-quantitative infrared spectroscopy (FTIR) of these sectors was performed for chemical analysis. Results: ROI-measurements were performed in 45 different sectors in 35 gallstones. Sectors containing >70% of cholesterol and no calcium component (n = 20) on FTIR could be identified with 95% sensitivity and 100% specificity on DECT. These sectors showed typical attenuation of -8 {+-} 4 HU at 80 kV and +22 {+-} 3 HU at 140 kV. Even the presence of a small calcium component (<10%) hindered the reliable identification of cholesterol components as such. Conclusion: Dual energy CT allows for reliable identification of gallstones containing a high percentage of cholesterol and no calcium component in this pre-clinical phantom model. Results from in vivo or anthropomorphic phantom trials will have to confirm these results. This may enable the identification of patients eligible for non-surgical treatment options in the future.
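The reported attenuation values suggest a simple two-energy threshold rule. The sketch below encodes the means and standard deviations from the abstract (-8 ± 4 HU at 80 kV, +22 ± 3 HU at 140 kV); the ±3-SD acceptance window is our assumption, not the classifier used in the study.

```python
# Sketch: threshold rule for "high-cholesterol, no calcium" gallstone sectors
# using the dual-energy attenuation values reported in the abstract.
# The n_sd window width is an assumption for illustration.

def looks_like_pure_cholesterol(hu_80kv, hu_140kv, n_sd=3.0):
    in_80 = abs(hu_80kv - (-8.0)) <= n_sd * 4.0   # -8 +/- 4 HU at 80 kV
    in_140 = abs(hu_140kv - 22.0) <= n_sd * 3.0   # +22 +/- 3 HU at 140 kV
    return in_80 and in_140

print(looks_like_pure_cholesterol(-8.0, 22.0))  # True
print(looks_like_pure_cholesterol(60.0, 80.0))  # False (calcium-containing)
```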
Noise analysis of genome-scale protein synthesis using a discrete computational model of translation
Energy Technology Data Exchange (ETDEWEB)
Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Swiss Institute of Bioinformatics (SIB), CH-1015 Lausanne (Switzerland); Stefaniuk, Adam Jan [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)
2015-07-28
Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
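The discrete-event simulation approach described above can be illustrated, in vastly simplified form, by a Gillespie simulation of a single birth-death protein-synthesis channel. This toy model is ours, not the genome-scale ribosome model of the paper; it shows how noise statistics such as the Fano factor are obtained from stochastic trajectories.

```python
import numpy as np

# Sketch: Gillespie (stochastic simulation algorithm) for a birth-death
# process: protein made at rate k, degraded at rate d per molecule.
# Rates are illustrative. Samples are taken at event times; a dwell-time
# weighting would be more rigorous but barely changes the numbers here.
rng = np.random.default_rng(0)
k, d = 10.0, 1.0
n, t, t_end = 0, 0.0, 2000.0
samples = []
while t < t_end:
    a_birth, a_death = k, d * n
    a_total = a_birth + a_death
    t += rng.exponential(1.0 / a_total)     # time to next reaction
    if rng.random() < a_birth / a_total:    # pick which reaction fires
        n += 1
    else:
        n -= 1
    if t > 100.0:                           # discard the initial transient
        samples.append(n)

mean = np.mean(samples)
fano = np.var(samples) / mean
print(mean, fano)  # mean near k/d = 10; Fano factor near 1 (Poissonian)
```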
Stability and Bifurcation Analysis of a Modified Epidemic Model for Computer Viruses
Directory of Open Access Journals (Sweden)
Chuandong Li
2014-01-01
We extend the three-dimensional SIR model to the four-dimensional case and then analyze its dynamical behavior, including stability and bifurcation. It is shown that the new model significantly improves the epidemic model for computer viruses and is more reasonable than most existing SIR models. Furthermore, we investigate the stability of the possible equilibrium point and the existence of the Hopf bifurcation with respect to the delay. By analyzing the associated characteristic equation, it is found that Hopf bifurcation occurs when the delay passes through a sequence of critical values. An analytical condition for determining the direction, stability, and other properties of bifurcating periodic solutions is obtained by using the normal form theory and center manifold argument. The obtained results may provide a theoretical foundation to understand the spread of computer viruses and then to minimize virus risks.
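As a minimal illustration of what a four-compartment virus-spread ODE looks like, the sketch below integrates a generic susceptible/exposed/infectious/recovered system with forward Euler. It is an illustrative stand-in, not the delayed 4D system of the paper (the Hopf-bifurcation analysis there requires the explicit time delay); all rate constants are invented.

```python
# Sketch: forward-Euler integration of a generic four-compartment
# (S/E/I/R) virus-spread model. Parameters are illustrative only.
beta, sigma, gamma, N = 0.5, 0.2, 0.1, 1000.0

def step(s, e, i, r, dt):
    new_inf = beta * s * i / N          # mass-action infection term
    ds = -new_inf
    de = new_inf - sigma * e            # exposed -> infectious at rate sigma
    di = sigma * e - gamma * i          # infectious -> recovered at rate gamma
    dr = gamma * i
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

s, e, i, r = N - 1.0, 0.0, 1.0, 0.0
for _ in range(20000):                  # 200 time units at dt = 0.01
    s, e, i, r = step(s, e, i, r, 0.01)
print(s + e + i + r)  # the total population is conserved (= 1000)
```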
Moreo, P.; Gaffney, E. A.; García-Aznar, J. M.; Doblaré, M.
2009-01-01
The diversity of biological form is generated by a relatively small number of underlying mechanisms. Consequently, mathematical and computational modelling can, and does, provide insight into how cellular level interactions ultimately give rise
Computer Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Pronskikh, V. S. [Fermilab
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes
Slepian modeling as a computational method in random vibration analysis of hysteretic structures
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob
1999-01-01
white noise. The computation time for obtaining estimates of relevant statistics at a given accuracy level is decreased by one or more orders of magnitude as compared to the computation time needed for direct elasto-plastic displacement response simulations by vectorial Markov sequence techniques....... Moreover the Slepian method gives valuable physical insight into the details of the plastic displacement development over time. The paper gives a general self-contained mathematical description of the Slepian method based plastic displacement analysis of Gaussian white noise excited EPOs. Experiences
DEFF Research Database (Denmark)
This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today...... on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering...... music analysis, the book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry....
International Nuclear Information System (INIS)
Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M
2014-01-01
The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.
Berahmani, Sanaz; Janssen, Dennis; Verdonschot, Nico
2017-01-01
It is essential to calculate micromotions at the bone-implant interface of an uncemented femoral total knee replacement (TKR) using a reliable computational model. In the current study, experimental measurements of micromotions were compared with predicted micromotions by Finite Element Analysis
Moreo, P.
2009-11-14
The diversity of biological form is generated by a relatively small number of underlying mechanisms. Consequently, mathematical and computational modelling can, and does, provide insight into how cellular level interactions ultimately give rise to higher level structure. Given that cells respond to mechanical stimuli, it is therefore important to consider the effects of these responses within biological self-organisation models. Here, we consider the self-organisation properties of a mechanochemical model previously developed by three of the authors in Acta Biomater. 4, 613-621 (2008), which is capable of reproducing the behaviour of a population of cells cultured on an elastic substrate in response to a variety of stimuli. In particular, we examine the conditions under which stable spatial patterns can emerge with this model, focusing on the influence of mechanical stimuli and the interplay of non-local phenomena. To this end, we have performed a linear stability analysis and numerical simulations based on a mixed finite element formulation, which have allowed us to study the dynamical behaviour of the system in terms of the qualitative shape of the dispersion relation. We show that the consideration of mechanotaxis, namely changes in migration speeds and directions in response to mechanical stimuli, alters the conditions for pattern formation in a singular manner. Furthermore, without non-local effects, responses to mechanical stimuli are observed to result in dispersion relations with positive growth rates at arbitrarily large wavenumbers, in turn yielding heterogeneity at the cellular level in model predictions. This highlights the sensitivity and necessity of non-local effects in mechanically influenced biological pattern formation models and the ultimate failure of the continuum approximation in their absence. © 2009 Society for Mathematical Biology.
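The dispersion-relation analysis described above can be illustrated on a generic two-species reaction-diffusion linearisation, where the growth rate of a mode with wavenumber k is the largest real eigenvalue of J − k²D. The Jacobian and diffusivities below are made-up Turing-unstable values, not the mechanochemical model of the paper.

```python
import numpy as np

# Sketch: dispersion relation lambda(k) = max Re eig(J - k^2 D) for a
# two-species reaction-diffusion linearisation. Illustrative numbers only.
J = np.array([[1.0, -1.0],
              [3.0, -2.0]])       # stable without diffusion: tr < 0, det > 0
D = np.diag([1.0, 20.0])          # strong diffusivity contrast drives Turing

def growth_rate(k):
    return np.linalg.eigvals(J - k ** 2 * D).real.max()

ks = np.linspace(0.0, 2.0, 401)
lam = np.array([growth_rate(k) for k in ks])
print(growth_rate(0.0) < 0)       # True: homogeneous state is stable
print(lam.max() > 0)              # True: a finite band of wavenumbers grows
```

Note the qualitative shape: growth is negative at k = 0 and at large k, positive only in a band, which is exactly the finite-wavelength pattern-forming instability the abstract discusses (and whose absence of a large-k cutoff signals the continuum-approximation failure they report).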
Computational analysis of integrated biosensing and shear flow in a microfluidic vascular model
Wong, Jeremy F.; Young, Edmond W. K.; Simmons, Craig A.
2017-11-01
Fluid flow and flow-induced shear stress are critical components of the vascular microenvironment commonly studied using microfluidic cell culture models. Microfluidic vascular models mimicking the physiological microenvironment also offer great potential for incorporating on-chip biomolecular detection. In spite of this potential, however, there are few examples of such functionality. Detection of biomolecules released by cells under flow-induced shear stress is a significant challenge due to severe sample dilution caused by the fluid flow used to generate the shear stress, frequently to the extent where the analyte is no longer detectable. In this work, we developed a computational model of a vascular microfluidic cell culture model that integrates physiological shear flow and on-chip monitoring of cell-secreted factors. Applicable to multilayer device configurations, the computational model was applied to a bilayer configuration, which has been used in numerous cell culture applications including vascular models. Guidelines were established that allow cells to be subjected to a wide range of physiological shear stress while ensuring optimal rapid transport of analyte to the biosensor surface and minimized biosensor response times. These guidelines therefore enable the development of microfluidic vascular models that integrate cell-secreted factor detection while addressing flow constraints imposed by physiological shear stress. Ultimately, this work will result in the addition of valuable functionality to microfluidic cell culture models that further fulfill their potential as labs-on-chips.
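For the shallow rectangular channels typical of such bilayer devices, wall shear stress is commonly estimated with the parallel-plate approximation τ = 6μQ/(wh²), valid when the channel is much wider than it is tall. The dimensions and flow rate below are illustrative, not taken from the paper.

```python
# Sketch: wall shear stress in a shallow rectangular microchannel,
# parallel-plate approximation tau = 6*mu*Q/(w*h^2), valid for w >> h.

def wall_shear_stress(mu, q, w, h):
    """mu [Pa*s], q [m^3/s], w and h [m] -> wall shear stress tau [Pa]."""
    return 6.0 * mu * q / (w * h ** 2)

# Water-like medium, 1 uL/s through a 500 um x 100 um channel:
tau = wall_shear_stress(1e-3, 1e-9, 500e-6, 100e-6)
print(tau)  # 1.2 Pa, within the ~0.1-10 Pa range of vascular shear stresses
```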
Computationally Modeling Interpersonal Trust
Directory of Open Access Journals (Sweden)
Jin Joo Lee
2013-12-01
We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naïveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S
2016-05-01
Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.
Directory of Open Access Journals (Sweden)
J. D. Herman
2013-07-01
The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
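The method of Morris screens parameters by averaging absolute "elementary effects" (one-at-a-time finite differences) over many random base points, producing the μ* statistic. The sketch below implements a minimal radial version on a toy three-parameter function; the function and sample counts are ours, not the HL-RDHM setup.

```python
import numpy as np

# Sketch: radial one-at-a-time Morris screening (mu* statistic).
# Toy function: x2 is inert, so Morris should rank it least sensitive.
rng = np.random.default_rng(42)

def f(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.0 * x[2] + x[0] * x[1]

def morris_mu_star(f, dim, n_traj=50, delta=0.25):
    ee = np.zeros((n_traj, dim))
    for r in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=dim)   # random base point
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta                            # perturb one input
            ee[r, i] = (f(xp) - fx) / delta           # elementary effect
    return np.abs(ee).mean(axis=0)                    # mu* per parameter

mu_star = morris_mu_star(f, dim=3)
print(mu_star)  # roughly [2.4, 1.4, 0.0]: x0 most influential, x2 inert
```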
International Nuclear Information System (INIS)
Rojas-Sola, José Ignacio; Bouza-Rodríguez, José Benito; Menéndez-Díaz, Agustín
2016-01-01
Highlights: • Technical and functional analysis of the two typologies of windmills in Spain. • Spatial distribution of velocities and pressures by computational-fluid dynamics (CFD). • Finite-element analysis (FEA) of the rotors of these two types of windmills. • Validation of the operative functionality of these windmills. - Abstract: A detailed study has been made of the two typologies of windmills in Spain, specifically the rectangular-bladed type, represented by the windmill ‘Sardinero’, located near the town of Campo de Criptana (Ciudad Real province, Spain), and the type with triangular sails (lateens), represented by the windmill ‘San Francisco’, in the town of Vejer de la Frontera (Cádiz province, Spain). For this, an ad hoc research methodology has been applied on the basis of three aspects: three-dimensional geometric modeling, analysis by computational-fluid dynamics (CFD), and finite-element analysis (FEA). The results found with the CFD technique show the correct functioning of the two windmills in relation to the spatial distribution of the wind velocities and pressures to which each is normally exposed (4–7 m/s in the case of ‘Sardinero’, and 5–11 m/s for ‘San Francisco’), thereby validating the operative functionality of both types. In addition, as a result of the FEA, the spatial distribution of stresses on the rotor has revealed that the greatest concentrations occur in the teeth of the head wheel in ‘Sardinero’, reaching a value of 12 MPa, and at the base of the masts in the case of ‘San Francisco’, with a value of 24 MPa. This analysis also shows that simple, effective designs reinforcing the masts absorb large stress concentrations that would otherwise cause breakage. Furthermore, it was confirmed that the oak wood from which the rotors were made functioned properly, as the windmill never exceeded the maximum admissible working stress, demonstrating the effectiveness of the materials
Analysis and Modeling of Social Influence in High Performance Computing Workloads
Zheng, Shuai
2011-06-01
High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the increasing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining user bursty behavior is important for HPC workload prediction and scheduling, which has a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world systems. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.
Energy Technology Data Exchange (ETDEWEB)
O'Kula, K. R. [Savannah River Site (SRS), Aiken, SC (United States); East, J. M. [Savannah River Site (SRS), Aiken, SC (United States); Weber, A. H. [Savannah River Site (SRS), Aiken, SC (United States); Savino, A. V. [Savannah River Site (SRS), Aiken, SC (United States); Mazzola, C. A. [Savannah River Site (SRS), Aiken, SC (United States)
2003-01-01
The evaluation of atmospheric dispersion/radiological dose analysis codes included fifteen models identified in authorization basis safety analyses at DOE facilities, or from regulatory and research agencies where past or current work warranted inclusion of a computer model. All computer codes examined were reviewed using general and specific evaluation criteria developed by the Working Group. The criteria were based on DOE Orders and other regulatory standards and guidance for performing bounding and conservative dose calculations. Three categories of criteria were included: (1) Software Quality/User Interface; (2) Technical Model Adequacy; and (3) Application/Source Term Environment. A consensus-based, limited quantitative ranking process was used to establish an order of model preference, both as an overall conclusion and under specific conditions.
Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu
2016-01-01
To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...
A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines
Zhe Lu; Yuntong She; Mark Loewen
2017-01-01
Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and the pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively ass...
Computer aided safety analysis
International Nuclear Information System (INIS)
1988-05-01
The document reproduces 20 selected papers from the 38 papers presented at the Technical Committee/Workshop on Computer Aided Safety Analysis organized by the IAEA in co-operation with the Institute of Atomic Energy in Otwock-Swierk, Poland on 25-29 May 1987. A separate abstract was prepared for each of these 20 technical papers. Refs, figs and tabs
CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions
International Nuclear Information System (INIS)
Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.
1993-10-01
The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model
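A common building block behind projection-based ROMs such as those described above is a reduced basis computed by proper orthogonal decomposition (POD) of solution snapshots. The sketch below illustrates only this basic ingredient with an invented snapshot family (a parameterised Gaussian bump); GNAT itself adds hyper-reduction and nonlinear solves on top of this idea.

```python
import numpy as np

# Sketch: POD reduced basis via SVD of a snapshot matrix, and the
# reconstruction error of projecting the snapshots onto n modes.
# Snapshot family (shifted Gaussians) is made up for illustration.
x = np.linspace(-1.0, 1.0, 200)
snapshots = np.stack([np.exp(-((x - c) ** 2) / 0.5)
                      for c in np.linspace(-0.5, 0.5, 30)], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

def reconstruction_error(n_modes):
    V = U[:, :n_modes]                      # reduced (POD) basis
    approx = V @ (V.T @ snapshots)          # orthogonal projection
    return np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)

print(reconstruction_error(1) > reconstruction_error(5))  # error shrinks
print(reconstruction_error(10))  # a handful of modes captures the family
```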
Chaos Modelling with Computers
Indian Academy of Sciences (India)
Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy and T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 29-39.
DEFF Research Database (Denmark)
Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul
2006-01-01
... method, suitable for separation and purification of thermally unstable materials whose design and analysis can be efficiently performed through reliable model-based techniques. This paper presents a generalized model for short-path evaporation and highlights its development, implementation and solution...
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and fully addresses the analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting: the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it allows one to efficiently combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
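The kind of parameter-perturbation sensitivity analysis described above can be sketched generically with finite differences (this is an illustrative sketch, not PeTTSy's MATLAB API; the logistic toy model, the integrator, and all names are invented for the example):

```python
def rk4(f, y0, t0, t1, steps, p):
    """Classical Runge-Kutta integration of dy/dt = f(t, y, p)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y, p)
        k2 = f(t + h / 2, y + h / 2 * k1, p)
        k3 = f(t + h / 2, y + h / 2 * k2, p)
        k4 = f(t + h, y + h * k3, p)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def logistic(t, y, r):
    """Invented toy ODE: logistic growth with rate parameter r."""
    return r * y * (1.0 - y)

def sensitivity(p, eps=1e-6):
    """Central finite-difference sensitivity of y(T=5) to the parameter p."""
    hi = rk4(logistic, 0.1, 0.0, 5.0, 1000, p + eps)
    lo = rk4(logistic, 0.1, 0.0, 5.0, 1000, p - eps)
    return (hi - lo) / (2.0 * eps)

s = sensitivity(1.0)   # positive: a faster growth rate raises y(5)
```

A toolbox such as the one described would additionally support timed, shaped perturbations and oscillator-specific outputs (period, phase); the finite-difference core shown here is only the simplest ingredient.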
Chong, Christopher
2018-01-01
This book summarizes a number of fundamental developments at the interface of granular crystals and the mathematical and computational analysis of some of their key localized nonlinear wave solutions. The subject presents a blend of the appeal of granular crystals as a prototypical engineering testbed for a variety of diverse applications, the novelty in the nonlinear physics of its coherent structures, and the tractability of a series of mathematical and computational techniques to analyse them. While the focus is on principal one-dimensional solutions such as shock waves, traveling waves, and discrete breathers, numerous extensions of the discussed patterns, e.g., in two dimensions, chains with defects, heterogeneous settings, and other recent developments are discussed. The book appeals to researchers in the field, as well as to graduate and advanced undergraduate students. It will be of interest to mathematicians, physicists and engineers alike.
International Nuclear Information System (INIS)
Knee, H.E.; Haas, P.M.
1985-01-01
A computer model capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context has been developed, sensitivity tested, and evaluated. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its generous input parameters and output variables, allows its usefulness to extend beyond PRA input alone.
Directory of Open Access Journals (Sweden)
Jayr Figueiredo de Oliveira
2013-10-01
Full Text Available. This article proposes an information technology model to evaluate fleet management failure. Qualitative research conducted as a case study within an interstate transport company in São Paulo State sought to establish a relationship between computer tools and the need for valid, trustworthy information, delivered within an acceptable timeframe, for decision making, reliability, availability and system management. Additionally, the study aimed to provide relevant and precise information in order to minimize and mitigate failures that may occur and compromise the functioning of the organization's entire operational base.
2017-04-03
... accuracy and stability of the model results. 4. CFD Experiment. OpenFOAM (Open source Field Operation and Manipulation) is an open-source CFD toolbox ... that enables customization of applications in continuum mechanics and chemical processes. The interFoam solver within OpenFOAM makes use of the ... the solution from OpenFOAM. The computational domains correspond to a flume 60 m long and 2.5 m high with the bottom step at the center. The height
International Nuclear Information System (INIS)
Max, G
2011-01-01
Traffic in computer networks can be described as a complicated system. These systems show non-linear features, and their behaviour is also difficult to simulate. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us observe dataflow problems of the networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the modelled network.
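The bottleneck definition used above, link capacity minus competing traffic on the limiting link, can be sketched in a few lines (illustrative only; the capacities and traffic volumes are made up):

```python
def available_bandwidth(links):
    """links: list of (capacity, competing_traffic) pairs, e.g. in Mbit/s."""
    return [cap - cross for cap, cross in links]

def bottleneck(links):
    """Index and available bandwidth of the link limiting end-to-end throughput."""
    avail = available_bandwidth(links)
    i = min(range(len(avail)), key=avail.__getitem__)
    return i, avail[i]

path = [(100, 40), (1000, 950), (100, 10)]   # three hops, invented numbers
idx, bound = bottleneck(path)                # the middle link limits throughput
```

Note that the nominally fastest link (1000 Mbit/s) is the bottleneck once competing traffic is subtracted, which is exactly why the paper measures the difference rather than raw capacity.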
Schu, Kathryn L.
Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements
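The Monte Carlo propagation step described above can be sketched as follows, assuming a simple two-input CES aggregate and an invented sampling distribution for the substitution elasticity (this is a stand-in for sampling the thesis's full estimated variance-covariance matrix, not its 35-sector model):

```python
import random

def ces(x1, x2, share, sigma):
    """Two-input CES aggregate with substitution elasticity sigma (sigma != 1)."""
    rho = (sigma - 1.0) / sigma
    return (share * x1 ** rho + (1.0 - share) * x2 ** rho) ** (1.0 / rho)

random.seed(0)
# invented distribution for sigma; a real run would draw jointly from the
# estimated parameter covariance of the economy's supply side
draws = [ces(2.0, 1.0, 0.5, max(0.05, random.gauss(0.7, 0.05)))
         for _ in range(10000)]
mean = sum(draws) / len(draws)   # Monte Carlo estimate of the aggregate
```

Repeating such draws through baseline and counterfactual model runs is what turns point estimates of abatement cost into the probabilistic estimates the abstract describes.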
Pisano, Aurora; Weichert, Dieter
2015-01-01
Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that “Direct Methods” and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications. Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in...
Climate Modeling Computing Needs Assessment
Petraska, K. E.; McCabe, J. D.
2011-12-01
This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.
Vezér, Martin A
2016-04-01
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and its primary causes are anthropogenic.
DEFF Research Database (Denmark)
Kinch, K.M.; Merrison, J.P.; Gunnlaugsson, H.P.
2006-01-01
Motivated by questions raised by the magnetic properties experiments on the NASA Mars Pathfinder and Mars Exploration Rover (MER) missions, we have studied in detail the capture of airborne magnetic dust by permanent magnets using a computational fluid dynamics (CFD) model supported by laboratory...... simulations. The magnets studied are identical to the capture magnet and filter magnet on MER, though results are more generally applicable. The dust capture process is found to be dependent upon wind speed, dust magnetization, dust grain size and dust grain mass density. Here we develop an understanding...... of how these parameters affect dust capture rates and patterns on the magnets and set bounds for these parameters based on MER data and results from the numerical model. This results in a consistent picture of the dust as containing varying amounts of at least two separate components with different...
Exploratory analysis regarding the domain definitions for computer based analytical models
Raicu, A.; Oanta, E.; Barhalescu, M.
2017-08-01
Our previous computer-based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as a result of Boolean operations with so-called 'simple' shapes. Through generalisation, the class of 'simple' shapes came to include areas bounded by curves approximated using spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to move beyond the current limitations, we conceived a general definition of the cross sections, which are now considered calculus domains consisting of several subdomains. The corresponding set of input data uses complex parameterizations. This new vision allows us to naturally assign a general number of attributes to the subdomains. In this way, new phenomena that use map-wise information, such as metal alloy equilibrium diagrams, may be modelled. The hierarchy of the input data text files, which use the comma-separated-value format, and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data-processing software instruments already developed. The corresponding software, to be developed subsequently, will be modularised and generalised so that it can be used in upcoming projects that require rapid development of computer-based models.
Directory of Open Access Journals (Sweden)
Farahmand F
2000-08-01
A computer model of the patellofemoral joint was developed and the effects of anterior displacement of the tibial tuberosity were investigated. The input geometrical and verification data for the model were obtained from an experimental study on a cadaver knee mounted in an Instron machine. The computer program found the configuration of the patellofemoral joint that satisfied both the geometrical and force equilibrium conditions simultaneously, using a trial graphical approach. Verification of the model was achieved by determining the patellar sagittal plane motion and patellofemoral contact locations and comparing the results with the experimental results of the same specimen and with published data. Simulation of the anterior displacement of the tibial tuberosity by the model showed that the contact area migrates distally on the femur and proximally on the patella following the operation. The contact force of the patellofemoral joint decreased significantly, by 70% at full extension, 30% at 30 degrees of flexion, and around 15% at higher flexion angles for a 1 cm anterior displacement of the tibial tuberosity, and nearly doubled for a 2 cm anterior displacement. The change in the effective moment arm of the quadriceps was not considerable. The results suggest that the major effect of the Maquet operation on the contact force appears in extension and mid-flexion rather than at deep flexion angles. Further displacement of the tuberosity enhances the reduction of the contact force; however, the total reduction is less than what was predicted by Maquet. The change of the contact location relieves pain in the short term but causes hyperpressure on the proximal retropatellar surface, which might be detrimental in the long term.
Energetic-economic dynamic computational analysis of plants with small capacity - Gera model
International Nuclear Information System (INIS)
Storfer, A.F.; Demanboro, A.C. de; Campello, C.A.G.B.
1990-01-01
A methodology and a mathematical model for the energy and economic analysis of hydroelectric power plants of low and medium capacity are presented. The methodology will be used for isolated or integrated hydroelectric power plants, including plants in which part of the energy produced goes to the local market and part to the regional electric system. (author)
Lindemann, J.P.; Kern, R.; Hateren, J.H. van; Ritter, H.; Egelhaaf, M.
2005-01-01
For many animals, including humans, the optic flow generated on the eyes during locomotion is an important source of information about self-motion and the structure of the environment. The blowfly has been used frequently as a model system for experimental analysis of optic flow processing at the
Formation of the Actor's/Speaker's Formant: A Study Applying Spectrum Analysis and Computer Modeling
Czech Academy of Sciences Publication Activity Database
Leino, T.; Laukkanen, A. M.; Radolf, Vojtěch
2011-01-01
Vol. 25, No. 2 (2011), pp. 150-158. ISSN 0892-1997. R&D Projects: GA ČR GA101/08/1155. Institutional research plan: CEZ:AV0Z20760514. Keywords: vocal exercising, voice quality, spectrum analysis, mathematical modeling. Subject RIV: BI - Acoustics. Impact factor: 1.390, year: 2011
MoManI: a tool to facilitate research, analysis, and teaching of computer models
Howells, Mark; Pelakauskas, Martynas; Almulla, Youssef; Tkaczyk, Alan H.; Zepeda, Eduardo
2017-04-01
Allocating limited resources efficiently is a task to which planning and policy design aspire. This may be a non-trivial task. For example, the seventh Sustainable Development Goal (SDG) of Agenda 2030 is to provide access to affordable, sustainable energy for all. On the one hand, energy is required to realise almost all other SDGs (a clinic requires electricity for fridges that store vaccines for maternal health, irrigated agriculture requires energy to pump water to crops in dry periods, etc.). On the other hand, the energy system is non-trivial: it requires the mapping of resources, their conversion into useable energy, and then into the machines that we use to meet our needs. That requires new tools that draw from standard techniques and best-in-class models, and that allow the analyst to develop new models. Thus we present the Model Management Infrastructure (MoManI). MoManI is used to develop, manage, run, and store input and results data for linear programming models. MoManI is a browser-based, open-source interface for systems modelling. It is available to various user audiences, from policy makers and planners through to academics. For example, we implement the Open Source energy Modelling System (OSeMOSYS) in MoManI. OSeMOSYS is a specialized energy model generator. A typical OSeMOSYS model would represent the current energy system of a country, region or city; in it, equations and constraints are specified and calibrated to a base year. From that, future technologies and policy options are represented, and from those, scenarios are designed and run. Efficient allocation of energy resources and expenditure on technology is calculated. Finally, results are visualized. At present this is done in relatively rigid interfaces or via (for some) cumbersome text files. Implementing and operating OSeMOSYS in MoManI shortens the learning curve and reduces the phobia associated with the complexity of computer modelling, thereby supporting effective capacity-building activities. The novel
Guruprasad, R.; Behera, B. K.
2015-10-01
Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of prediction of bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which genetic algorithm was first used as a learning algorithm to optimize the number of neurons and connection weights of the neural network. The Genetic algorithm optimized network structure was further allowed to learn using back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the prediction by neuro-genetic and ANFIS models were better in comparison with that of back propagation neural network model.
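A toy version of the back-propagation stage described above can be sketched as follows (a hand-rolled one-hidden-layer network trained on an invented target function, not the paper's fabric data or its neuro-genetic/ANFIS variants):

```python
import math, random

random.seed(1)
H = 4                                            # hidden neurons (a GA could tune this)
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

# invented toy target: y = x^2 on [-1, 1]
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def epoch(lr=0.05):
    """One pass of stochastic gradient descent; returns mean squared error."""
    global b2
    err = 0.0
    for x, y in data:
        h, out = forward(x)
        d = out - y                              # dLoss/dout for squared error
        err += d * d
        for j in range(H):
            g = d * w2[j] * (1.0 - h[j] ** 2)    # backprop through tanh
            w2[j] -= lr * d * h[j]
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * d
    return err / len(data)

first = epoch()
for _ in range(200):
    last = epoch()                               # training error should fall
```

In the hybrid scheme the paper describes, a genetic algorithm would first search over the network size and initial weights (here `H`, `w1`, `w2`), after which this gradient-descent loop refines the chosen network.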
Lee, Chia-Fone; Chen, Peir-Rong; Lee, Wen-Jeng; Chen, Jyh-Horng; Liu, Tien-Chen
2006-05-01
To present a systematic and practical approach that uses high-resolution computed tomography to derive models of the middle ear for finite element analysis. This prospective study included 31 subjects with normal hearing and no previous otologic disorders. Temporal bone images obtained from 15 right ears and 16 left ears were used for evaluation and reconstruction. High-resolution computed tomography of the temporal bone was performed using simultaneous acquisition of 16 sections with a collimated slice thickness of 0.625 mm. All images were transferred to an Amira visualization system for three-dimensional reconstruction. The created three-dimensional model was translated into two commercial modeling packages, Patran and ANSYS, for finite element analysis. The characteristic dimensions of the model were measured and compared with previously published histologic section data. This result confirms that the geometric model created by the proposed method is accurate, except that the tympanic membrane is thicker than when measured by the histologic section method. No obvious difference in the geometrical dimensions between right and left ossicles was found (P > .05). The three-dimensional model created by the finite element method and the predicted umbo and stapes displacements are close to the bounds of the experimental curves of Nishihara's, Huber's, Gan's, and Sun's data across the frequency range of 100 to 8000 Hz. The model includes a description of the geometry of the middle ear components and dynamic equations of vibration. The proposed method is quick, practical, low-cost, and, most importantly, noninvasive as compared with histologic section methods.
Directory of Open Access Journals (Sweden)
Laura Trotta
Bistable dynamical switches are frequently encountered in mathematical modeling of biological systems because binary decisions are at the core of many cellular processes. Bistable switches present two stable steady-states, each of them corresponding to a distinct decision. In response to a transient signal, the system can flip back and forth between these two stable steady-states, switching between both decisions. Understanding which parameters and states affect this switch between stable states may shed light on the mechanisms underlying the decision-making process. Yet, answering such a question involves analyzing the global dynamical (i.e., transient) behavior of a nonlinear, possibly high-dimensional model. In this paper, we show how a local analysis at a particular equilibrium point of bistable systems is highly relevant to understanding the global properties of the switching system. The local analysis is performed at the saddle point, an often disregarded equilibrium point of bistable models which is shown here to be a key ruler of the decision-making process. Results are illustrated on three previously published models of biological switches: two models of apoptosis, the programmed cell death, and one model of long-term potentiation, a phenomenon underlying synaptic plasticity.
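A one-dimensional caricature (invented for illustration, not taken from the paper) shows why the unstable equilibrium between two stable states governs the switch: for dx/dt = x - x^3, the sign of f'(x*) classifies each equilibrium, and trajectories starting on either side of the unstable point flow to different stable states, i.e. different "decisions":

```python
def f(x):
    """Bistable vector field dx/dt = x - x**3 with equilibria at -1, 0, 1."""
    return x - x ** 3

def fprime(x, eps=1e-6):
    """Central-difference derivative, used to classify equilibria."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

equilibria = [-1.0, 0.0, 1.0]
stability = {x: ("stable" if fprime(x) < 0 else "unstable") for x in equilibria}

def simulate(x0, dt=0.01, steps=2000):
    """Forward-Euler integration; trajectories end near a stable state."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

# initial conditions straddling the unstable point reach different decisions
left, right = simulate(-0.05), simulate(0.05)
```

In higher dimensions the saddle point plays the role of the unstable point here, with its stable manifold acting as the separatrix between the two basins of attraction.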
Laursen, Tod A
2003-01-01
This book comprehensively treats the formulation and finite element approximation of contact and impact problems in nonlinear mechanics. Intended for students, researchers and practitioners interested in numerical solid and structural analysis, as well as for engineers and scientists dealing with technologies in which tribological response must be characterized, the book includes an introductory but detailed overview of nonlinear finite element formulations before dealing with contact and impact specifically. Topics encompassed include the continuum mechanics, mathematical structure, variational framework, and finite element implementations associated with contact/impact interaction. Additionally, important and currently emerging research topics in computational contact mechanics are introduced, encompassing such topics as tribological complexity, conservative treatment of inelastic impact interaction, and novel spatial discretization strategies.
Kent, Alexander R; Min, Xiaoyi; Hogan, Quinn H; Kramer, Jeffery M
2018-04-01
The mechanisms of dorsal root ganglion (DRG) stimulation for chronic pain remain unclear. The objective of this work was to explore the neurophysiological effects of DRG stimulation using computational modeling. Electrical fields produced during DRG stimulation were calculated with finite element models, and were coupled to a validated biophysical model of a C-type primary sensory neuron. Intrinsic neuronal activity was introduced as a 4 Hz afferent signal or somatic ectopic firing. The transmembrane potential was measured along the neuron to determine the effect of stimulation on intrinsic activity across stimulation parameters, cell location/orientation, and membrane properties. The model was validated by showing close correspondence in action potential (AP) characteristics and firing patterns when compared to experimental measurements. Subsequently, the model output demonstrated that T-junction filtering was amplified with DRG stimulation, thereby blocking afferent signaling, with cathodic stimulation at amplitudes of 2.8-5.5 × stimulation threshold and frequencies above 2 Hz. This amplified filtering was dependent on the presence of calcium and calcium-dependent small-conductance potassium channels, which produced a hyperpolarization offset in the soma, stem, and T-junction with repeated somatic APs during stimulation. Additionally, DRG stimulation suppressed somatic ectopic activity by hyperpolarizing the soma with cathodic or anodic stimulation at amplitudes of 3-11 × threshold and frequencies above 2 Hz. These effects were dependent on the stem axon being relatively close to and oriented toward a stimulating contact. These results align with the working hypotheses on the mechanisms of DRG stimulation, and indicate the importance of stimulation amplitude, polarity, and cell location/orientation on neuronal responses.
Energy Technology Data Exchange (ETDEWEB)
Sinclair, T. J.E. [Dames and Moore, London, England, United Kingdom; Shillabeer, J. H. [Dames and Moore, Toronto (Canada); Herget, G. [CANMET, Ottawa (Canada)
1980-05-15
This paper describes the application of a computer model to the analysis of backfill stability in pillar recovery operations with particular reference to two case studies. An explicit finite difference computer program was developed for the purpose of modelling the three-dimensional interaction of rock and backfill in underground excavations. Of particular interest was the mechanics of stress transfer from the rock mass to the pillars and then the backfill. The need, therefore, for a model to allow for the three-dimensional effects and the sequence of operations is evident. The paper gives a brief description of the computer program, descriptions of the mines, the sequences of operations and how they were modelled, and the results of the analyses in graphical form. For both case studies, failure of the backfill was predicted at certain stages. Subsequent reports from the mines indicate that such failures did not occur at the relevant stage. The paper discusses the validity of the model and concludes that the approach accurately represents the principles of rock mechanics in cut-and-fill mining and that further research should be directed towards determining the input parameters to an equal degree of sophistication.
International Conference on Computational Intelligence, Cyber Security, and Computational Models
Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn
2016-01-01
This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India during December 17-19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.
A Computer Model for the Hydraulic Analysis of Open Channel Cross Sections
Directory of Open Access Journals (Sweden)
W. H. Shayya
1996-01-01
Full Text Available Irrigation and hydraulic engineers are often faced with the difficulty of tedious trial solutions of the Manning equation to determine the various geometric elements of open channels. This paper addresses the development of a computer model for the design of the most commonly used channel sections. The developed model is intended as an educational tool. It may be applied to the hydraulic design of trapezoidal, rectangular, triangular, parabolic, round-cornered rectangular, and circular cross sections. Two procedures were utilized for the solution of the encountered implicit equations: the Newton-Raphson and the Regula-Falsi methods. In order to initiate the solution process, these methods require one and two initial guesses, respectively. The results revealed that the Regula-Falsi method required more iterations to converge to the solution than the Newton-Raphson method, irrespective of the nearness of the initial guess to the actual solution. The average number of iterations for the Regula-Falsi method was approximately three times that of the Newton-Raphson method.
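The iterative solution described above can be sketched in a few lines; the trapezoidal geometry, channel parameters, and centered numerical derivative below are illustrative assumptions, not the paper's implementation.

```python
def manning_flow(y, b, z, n, S):
    """Discharge from the Manning equation for a trapezoidal section of depth y."""
    A = (b + z * y) * y                          # flow area
    P = b + 2.0 * y * (1.0 + z * z) ** 0.5       # wetted perimeter
    R = A / P                                    # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5

def normal_depth(Q, b, z, n, S, y0=1.0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration for the normal depth; needs one initial guess."""
    y = y0
    for _ in range(max_iter):
        f = manning_flow(y, b, z, n, S) - Q
        h = 1e-6                                 # centered numerical derivative
        df = (manning_flow(y + h, b, z, n, S)
              - manning_flow(y - h, b, z, n, S)) / (2.0 * h)
        y_new = y - f / df
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Hypothetical channel: 2 m bottom width, 1.5H:1V side slopes, n = 0.015, S = 0.001
y_n = normal_depth(Q=5.0, b=2.0, z=1.5, n=0.015, S=0.001)
```

The Regula-Falsi alternative mentioned in the abstract would instead bracket the root with two initial guesses and replace the derivative step with a secant-style update.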
A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines
Directory of Open Access Journals (Sweden)
Zhe Lu
2017-08-01
Full Text Available Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and the pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively assessed the effects of uncertainties on leak detectability of a commonly used real-time transient model-based leak detection system. Uncertainties in fluid properties, field sensors, and the data acquisition system were evaluated. Errors were introduced into the input variables of the leak detection system individually and collectively, and the changes in leak detectability caused by the uncertainties were quantified using simulated leaks. This study provides valuable quantitative results contributing towards a better understanding of how real-world uncertainties affect leak detection. A general ranking of the importance of the uncertainty sources was obtained, from high to low: time skew, bulk modulus error, viscosity error, and polling time. It was also shown that inertia-dominated pipeline systems were less sensitive to uncertainties compared to friction-dominated systems.
Directory of Open Access Journals (Sweden)
Nerea Mangado
2016-11-01
Full Text Available Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
The Use Of Computational Human Performance Modeling As Task Analysis Tool
Energy Technology Data Exchange (ETDEWEB)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
Directory of Open Access Journals (Sweden)
Nan-Hung Hsieh
2018-06-01
Full Text Available Traditionally, the solution to reduce parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is through expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential, and therefore can be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish “influential” parameters to be estimated and “non-influential” parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all the 21 originally calibrated model parameters (OMP), determined by the “expert judgment”-based approach, vs. the subset of original influential parameters (OIP), determined by GSA from the OMP. We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP), determined by GSA from the FMP. We also examined the impact of different cut-off points to distinguish the influential and non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost to achieve convergence) in identifying influential parameters. We identified several originally calibrated parameters that were not influential, and could be fixed to improve computational
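To make the elementary effect-based Morris screening concrete, here is a minimal one-at-a-time sketch; the toy three-parameter model, sampling scheme, and step size are our assumptions, not the study's PBPK setup or its full trajectory design.

```python
import numpy as np

def morris_mu_star(model, bounds, r=50, delta=0.5, seed=0):
    """Crude Morris screening: mean absolute elementary effect (mu*) per input."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        x = lo + rng.random(k) * (hi - lo) * (1.0 - delta)  # room for the step
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta * (hi[j] - lo[j])                # perturb one input
            ee[i, j] = (model(xp) - y0) / delta
    return np.abs(ee).mean(axis=0)

# Hypothetical toy model: input 0 dominates, input 2 has no effect at all.
mu_star = morris_mu_star(lambda x: 10.0 * x[0] + x[1] ** 2,
                         bounds=[(0, 1), (0, 1), (0, 1)])
```

In a screening study like the one above, inputs with small mu* would be the candidates for fixing before Bayesian calibration.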
Ignatova, Zoya; Zimmermann, Karl-Heinz
2008-01-01
In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.
Ihekwaba, Adaoha E C; Mura, Ivan; Barker, Gary C
2014-10-24
Bacterial spores are important contaminants in food, and the spore forming bacteria are often implicated in food safety and food quality considerations. Spore formation is a complex developmental process involving the expression of more than 500 genes over the course of 6 to 8 hrs. The process culminates in the formation of resting cells capable of resisting environmental extremes and remaining dormant for long periods of time, germinating when conditions promote further vegetative growth. Experimental observations of sporulation and germination are problematic and time-consuming, so reliable models are an invaluable asset in terms of prediction and risk assessment. In this report we develop a model which assists in the interpretation of sporulation dynamics. This paper defines and analyses a mathematical model for the network regulating Bacillus subtilis sporulation initiation, from sensing of sporulation signals down to the activation of the early genes under control of the master regulator Spo0A. Our model summarises and extends other published modelling studies, by allowing the user to execute sporulation initiation in a scenario where Isopropyl β-D-1-thiogalactopyranoside (IPTG) is used as an artificial sporulation initiator as well as in modelling the induction of sporulation in wild-type cells. The analysis of the model results and the comparison with experimental data indicate that the model is good at predicting inducible responses to sporulation signals. However, the model is unable to reproduce experimentally observed accumulation of phosphorelay sporulation proteins in wild type B. subtilis. This model also highlights that the phosphorelay sub-component, which relays the signals detected by the sensor kinases to the master regulator Spo0A, is crucial in determining the response dynamics of the system. We show that there is a complex connectivity between the phosphorelay features and the master regulator Spo0A. Additionally, we discovered that the
International Nuclear Information System (INIS)
Marrel, A.
2008-01-01
In the studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computer time. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a meta model, which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of a Gaussian process meta model to perform the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the meta model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the meta model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta model. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with the Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater. (author) [fr
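The meta model idea can be illustrated in a few lines of NumPy: fit a kriging-style Gaussian process interpolator on a small design of experiments, then spend the Monte Carlo budget on the cheap surrogate to estimate first-order Sobol indices. The toy simulator, kernel length scale, and pick-freeze estimator below are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np

# Minimal kriging-style interpolator: RBF kernel, exact interpolation with jitter.
def rbf(XA, XB, ell=0.3):
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def code(x):                                  # hypothetical expensive simulator
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))           # small design of experiments
alpha = np.linalg.solve(rbf(X, X) + 1e-8 * np.eye(40), code(X))
predict = lambda Xs: rbf(Xs, X) @ alpha       # cheap surrogate of the code

# First-order Sobol indices via a Monte Carlo pick-freeze estimator on the surrogate.
N = 50000
A = rng.uniform(0, 1, size=(N, 2))
B = rng.uniform(0, 1, size=(N, 2))
yA = predict(A)
S = []
for j in range(2):
    C = B.copy()
    C[:, j] = A[:, j]                         # keep input j, resample the others
    S.append((np.mean(yA * predict(C)) - np.mean(yA) ** 2) / np.var(yA))
```

The 50,000 surrogate evaluations here stand in for code runs that would be unaffordable on the real simulator, which is the whole point of the meta model.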
Directory of Open Access Journals (Sweden)
Jiqing Li
2018-04-01
Full Text Available GIS (Geographic Information System) can be used to combine multiple hydrologic data and geographic data for FIA (Flood Impact Assessment). For a developing country like China, a lot of geographic data is in the CAD (Computer Aided Design) format. The commonly used method for converting CAD into DEM may result in data loss. This paper introduces a solution for the conversion between CAD data and DEM data. The method has been applied to the FIA based on the topographic map of CAD in Hanjiang River. When compared with the other method, the new method solves the data loss problem. In addition, the paper uses GIS to simulate the inundation range, area, and the depth distribution of flood backwater. Based on the analysis, the author concludes: (1) the differences of the inundation areas between the flood of HQ100 and the flood of HQ50 are small. (2) The inundation depth shows a decreasing trend along the upstream of the river. (3) The inundation area less than 4 m in the flood of HQ50 is larger than that in the flood of HQ100; the result is opposite when the inundation depth is greater than 4 m. (4) The flood loss is 392.32 million RMB for the flood of HQ50 and 610.02 million RMB for the flood of HQ100. The method can be applied to FIA.
Directory of Open Access Journals (Sweden)
Guohua Fang
2016-09-01
Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing values of the discharge fees (increased by 50%, 100%, and 150%), three scenarios are simulated to examine their influence on the overall economy and each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.
International Nuclear Information System (INIS)
Iooss, B.
2009-01-01
The present document constitutes my Habilitation thesis report. It recalls my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both referring to the treatment of uncertainty in engineering problems. The first chapter establishes a synthesis of my work on high frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel-times in random and/or turbulent media. The new results mainly concern the introduction of the velocity field's statistical anisotropy in the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily motivated by requirements in geophysics (oil exploration and seismology). The second chapter concerns the probabilistic techniques used to study the effect of input variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out on the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and the use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally approached: the high quantile estimation of a computer code output and the analysis of stochastic computer codes. We conclude this report with some perspectives about numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
Energy Technology Data Exchange (ETDEWEB)
Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Sudret, Bruno [ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification, Stefano-Franscini-Platz 5, 8093 Zürich (Switzerland); Varsier, Nadège [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Picon, Odile [ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Wiart, Joe [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France)
2015-04-01
In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC model is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to perform better than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
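As a rough sketch of the selection step, the snippet below uses a greedy, LARS-like forward selection over a candidate polynomial basis (a simplification standing in for true least-angle regression; the toy response and basis are our assumptions, not the paper's dosimetry model). The retained terms would then serve as the trend functions of the universal Kriging model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = 1.5 * X[:, 0] ** 2 - X[:, 1] + 0.05 * rng.standard_normal(200)

# Candidate polynomial basis (degree <= 2, no cross terms, illustrative only).
names = ["x0", "x1", "x2", "x0^2", "x1^2", "x2^2"]
P = np.column_stack([X, X ** 2])

yc = y - y.mean()
selected, residual = [], yc.copy()
for _ in range(2):                        # retain the two most influential terms
    corr = np.abs(P.T @ residual)         # correlation with the current residual
    j = int(np.argmax(corr))
    selected.append(j)
    beta, *_ = np.linalg.lstsq(P[:, selected], yc, rcond=None)
    residual = yc - P[:, selected] @ beta # refit, then search the new residual

chosen = {names[j] for j in selected}     # the retained regression functions
```

True LARS moves coefficients along equiangular directions rather than fully refitting at each step, but the forward-selection behavior it exhibits is the same in spirit.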
Parallel computing in enterprise modeling.
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Cenek, Martin; Dahl, Spencer K.
2016-11-01
Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
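The recording of pattern-to-pattern transition probabilities described above can be sketched as an empirical transition matrix over a labeled behavior sequence; the sequence and the three pattern labels below are illustrative assumptions, not output of the paper's framework.

```python
import numpy as np

# Hypothetical labeled behavior sequence for one agent (patterns 0, 1, 2),
# standing in for the patterns the framework extracts from an ABM execution.
seq = [0, 0, 1, 1, 1, 2, 0, 1, 2, 2, 0, 0, 1]

def transition_matrix(seq, n_patterns):
    """Empirical probability of an agent switching from pattern i to pattern j."""
    counts = np.zeros((n_patterns, n_patterns))
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1                  # tally each observed transition
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

T = transition_matrix(seq, 3)              # T[i, j] = P(next = j | current = i)
```

Comparing such matrices across runs or parameter settings is one way to make ABM executions computationally comparable, as the abstract proposes.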
Plasticity modeling & computation
Borja, Ronaldo I
2013-01-01
There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.
Analysis of the high frequency longitudinal instability of bunched beams using a computer model
International Nuclear Information System (INIS)
Messerschmid, E.; Month, M.
1976-01-01
The effects of high frequency longitudinal forces on bunched beams are investigated using a computer model. These forces are thought to arise from the transfer of energy between the beam and various structures in the vacuum chamber, this coupling being characterized by a longitudinal impedance function. The simulation is performed with a passive cavity-like element. It is found that the instability can be generated if three conditions are fulfilled: (1) the impedance must be sufficiently large, (2) the induced field must have a fast wake, and (3) the frequency of the induced field must be high enough. In particular, it is shown that the coasting beam threshold criterion for the longitudinal impedance accurately describes the onset of instability, if local values along the bunch of energy spread and current are used. It is also found that the very fast initial growth rate is in good agreement with linear theory and that the coasting beam overshoot expression may be used as a rough guide of the limiting growth for unstable bunches. Concerning the wake field, it is shown how the instability tends to disappear as the fields persist longer. It is furthermore demonstrated that as the wavelength of the unstable mode is increased, initially unstable conditions begin to weaken and vanish. This, it should be emphasized, is primarily a result of the strong correlation between the unstable mode frequency and the time rate of attenuation of the induced fields. ISR parameters are used throughout and a correspondence between the microwave instability observed in the ISR bunches and the simulated instability is suggested. (Auth.)
Gebali, Fayez
2015-01-01
This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks and switch architectures and buffering strategies. · Provides techniques for modeling and analysis of network software and switching equipment; · Discusses design options used to build efficient switching equipment; · Includes many worked examples of the application of discrete-time Markov chains to communication systems; · Covers the mathematical theory and techniques necessary for ana...
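As a flavor of the discrete-time Markov chain material the book covers, the sketch below computes the stationary occupancy distribution of a small switch buffer; the chain structure and the arrival/departure probabilities are our illustrative assumptions, not a worked example from the book.

```python
import numpy as np

# Birth-death chain for a 4-state buffer occupancy: per time slot the occupancy
# rises w.p. `up` (arrival, no departure) and falls w.p. `down` (the reverse).
a, d = 0.3, 0.5                          # per-slot arrival and departure probabilities
up, down = a * (1 - d), d * (1 - a)

n = 4
P = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        P[i, i + 1] = up
    if i > 0:
        P[i, i - 1] = down
    P[i, i] = 1.0 - P[i].sum()           # remaining mass: occupancy unchanged

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because departures outpace arrivals here (down > up), the stationary mass concentrates at low occupancy, which is the qualitative behavior one expects of a stable buffer.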
Analysis and modeling of social influence in high performance computing workloads
Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana
2011-01-01
Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies
Pritesh Jain; Vaishali Chourey; Dheeraj Rane
2011-01-01
Cloud Computing has emerged as a major information and communications technology trend and has proved to be a key technology for market development and analysis for users in several fields. The practice of computing across two or more data centers separated by the Internet is growing in popularity due to an explosion in scalable computing demands. However, one of the major challenges facing cloud computing is how to secure and protect the user's data and processes. ...
Models of optical quantum computing
Directory of Open Access Journals (Sweden)
Krovi Hari
2017-03-01
Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.
Cosmic logic: a computational model
International Nuclear Information System (INIS)
Vanchurin, Vitaly
2016-01-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
The Fermilab central computing facility architectural model
International Nuclear Information System (INIS)
Nicholls, J.
1989-01-01
The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related both to auxiliary component malfunctions and to stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, to account for quantitative residual deviations from normal conditions and achieve univocal fault isolation.
Directory of Open Access Journals (Sweden)
Vanessa Almendro
2014-02-01
Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and posttreatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.
International Nuclear Information System (INIS)
Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; Rye, Inga H.; Borresen-Dale, Anne-Lise; Maruyama, Reo; Van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia
2014-01-01
Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution
Funamoto, Kenichi; Hayase, Toshiyuki; Shirai, Atsushi
Simplified two-dimensional flow analysis is performed in order to simulate frictional characteristics measurement of red blood cells moving on a glass plate in a medium with an inclined centrifuge microscope. Computation under various conditions reveals the influences of parameters on lift, drag, and moment acting on a red blood cell. Among these forces, lift appears only when the cell is longitudinally asymmetric. By considering the balance of forces, the frictional characteristics of the red blood cell are modeled as the sum of Coulomb friction and viscous drag. The model describes the possibility that the red blood cell deforms to expand in the front side in response to the inclined centrifugal force. When velocity exceeds some critical value, the lift overcomes the normal centrifugal force component, and the thickness of the plasma layer between the cell and the glass plate increases from the initial value of the plasma protein thickness.
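The friction model summarized above, the sum of Coulomb friction and viscous drag, can be sketched as a one-line force law. The function and all coefficient values below are illustrative assumptions, not the parameters identified in the study:

```python
def friction_force(velocity, normal_force, mu=0.05, viscous_coeff=1e-6):
    """Frictional force on the cell modeled as the sum of Coulomb
    friction (proportional to the normal load) and viscous drag
    (proportional to sliding velocity). Coefficients are placeholders."""
    return mu * normal_force + viscous_coeff * velocity
```

Under this form, a measured force-velocity curve has an intercept set by the Coulomb term and a slope set by the viscous term, which is how the two contributions can be separated experimentally.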
Energy Technology Data Exchange (ETDEWEB)
Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)
1996-08-01
The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objective of the third year's research is to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology for the dynamic reliability analysis of nuclear power plants. The analysis of component failure data and related research supporting the simulator must precede these tasks to provide proper input to the simulator. This research is therefore divided into three major parts: 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: an accelerated simulation analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)
Computational system for geostatistical analysis
Directory of Open Access Journals (Sweden)
Vendrusculo Laurimar Gonçalves
2004-01-01
Geostatistics identifies the spatial structure of variables representing several phenomena, and its use is becoming more intense in agricultural activities. This paper describes a computer program, based on Windows interfaces (Borland Delphi), which performs spatial analyses of datasets through geostatistical tools: classical statistical calculations, averages, cross- and directional semivariograms, simple kriging estimates, and jackknifing calculations. A published dataset of soil carbon and nitrogen was used to validate the system. The system proved useful for the geostatistical analysis process, replacing the manipulation of computational routines in an MS-DOS environment. The Windows development approach allowed the user to model the semivariogram graphically with a greater degree of interaction, functionality rarely available in similar programs. Given its quick prototyping and the simplicity of incorporating correlated routines, the Delphi environment has the main advantage of permitting the continued evolution of this system.
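As a sketch of the core computation behind such geostatistical tools, the classical (Matheron) experimental semivariogram can be estimated as below. This is a generic textbook estimator, not the program's actual routine:

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Classical (Matheron) estimator:
    gamma(h) = 1 / (2 N(h)) * sum over pairs (i, j) within lag bin h
               of (z_i - z_j)^2.
    Returns one gamma value per lag bin defined by `lag_edges`."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # pairwise separation distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    gammas = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gammas.append(sq[mask].mean() / 2.0 if mask.any() else np.nan)
    return np.array(gammas)
```

The resulting gamma-versus-lag curve is what the user would then fit graphically with a semivariogram model (spherical, exponential, etc.) before kriging.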
International Nuclear Information System (INIS)
Cisilino, Adrian; D'Amico, Diego; Buroni, Federico; Commisso, Pablo; Sammartino, Mario; Capiel, Carlos
2008-01-01
A methodology for the construction of computational models from CT images is presented in this work. The computational models serve for stress analysis of bones using the Finite Element Method. The elastic constants of the bone tissue are calculated using the density data obtained from the CT images. The proposed methodology is demonstrated in the construction of a model for the gleno-humeral joint. (authors)
Directory of Open Access Journals (Sweden)
Andrzej Rusek
2008-01-01
The mathematical model of a cylindrical linear induction motor (C-LIM) fed via a frequency converter is presented in the paper. The model was developed in order to numerically analyze transient states. Problems concerning the dynamics of AC machines, especially the linear induction motor, are presented in [1-7]. Development of the C-LIM mathematical model is based on the circuit method and on analogy to the rotary induction motor. The analogy between (a) the stator and rotor windings of the rotary induction motor and (b) the winding of the primary part of the C-LIM (inductor) and the closed current circuits in the external secondary part of the C-LIM (race) is taken into consideration. The equations of the C-LIM mathematical model are presented in matrix form together with equations expressing each vector separately. A computational analysis of selected transient states of the C-LIM fed via a frequency converter is presented in the paper. Two typical examples of C-LIM operation are considered for the analysis: (a) starting the motor at various static loads and various synchronous velocities, and (b) reversing the motor under the same operating conditions. Results of the simulation are presented as transient responses including the transient electromagnetic force, transient linear velocity, and transient phase current.
Analysis and modeling of social influence in high performance computing workloads
Zheng, Shuai
2011-01-01
Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workload prediction and scheduling, we propose a fast-converging, computationally efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify social relationships by using a small portion of incoming jobs and (2) efficiently track group evolution over time. © 2011 Springer-Verlag.
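A toy illustration of the dominant-user/follower pattern is to rank users by job counts in a workload log. The ranking rule and the `top_frac` threshold below are hypothetical simplifications for exposition, not the paper's online learning algorithm:

```python
from collections import Counter

def dominant_users(job_log, top_frac=0.1):
    """Rank users by submitted-job counts and return the top fraction
    as the 'dominant' set. `job_log` is a list of user ids, one entry
    per submitted job. The threshold is an illustrative choice."""
    counts = Counter(job_log)
    ranked = [user for user, _ in counts.most_common()]
    k = max(1, int(len(ranked) * top_frac))
    return ranked[:k]
```

In a power-law workload, a small `top_frac` of users accounts for most submitted jobs, which is the skew the paper's social-graph analysis quantifies.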
Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy
Morimoto, Satoshi; Remijn, Gerard B.; Nakajima, Yoshitaka
2016-01-01
Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects ...
WASTE-ACC: A computer model for analysis of waste management accidents
International Nuclear Information System (INIS)
Nabelssi, B.K.; Folga, S.; Kohout, E.J.; Mueller, C.J.; Roglans-Ribas, J.
1996-12-01
In support of the U.S. Department of Energy's (DOE's) Waste Management Programmatic Environmental Impact Statement, Argonne National Laboratory has developed WASTE-ACC, a computational framework and integrated PC-based database system, to assess atmospheric releases from facility accidents. WASTE-ACC facilitates the many calculations for the accident analyses necessitated by the numerous combinations of waste types, waste management process technologies, facility locations, and site consolidation strategies in the waste management alternatives across the DOE complex. WASTE-ACC is a comprehensive tool that can effectively test future DOE waste management alternatives and assumptions. The computational framework can access several relational databases to calculate atmospheric releases. The databases contain throughput volumes, waste profiles, treatment process parameters, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses. This report describes the computational framework and supporting databases used to conduct accident analyses and to develop source terms to assess potential health impacts that may affect on-site workers and off-site members of the public under various DOE waste management alternatives
Impact analysis on a massively parallel computer
International Nuclear Information System (INIS)
Zacharia, T.; Aramayo, G.A.
1994-01-01
Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper
A physicist's model of computation
International Nuclear Information System (INIS)
Fredkin, E.
1991-01-01
An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs
Qualitative and Computational Analysis of a Mathematical Model for Tumor-Immune Interactions
Directory of Open Access Journals (Sweden)
F. A. Rihan
2012-01-01
We provide a family of ordinary and delay differential equations to model the dynamics of tumor-growth and immunotherapy interactions. We explore the effects of adoptive cellular immunotherapy on the model and describe under what circumstances the tumor can be eliminated. The possibility of clearing the tumor with such a strategy depends on two parameters in the model: the rate of influx of the effector cells and the rate of influx of IL-2. The critical tumor-growth rate, below which an endemic tumor does not exist, has been found. One can use the model to make predictions about tumor dormancy.
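The role of the effector-cell influx rate can be seen in a minimal sketch of this kind of model. The two equations and all parameter values below are illustrative stand-ins for the family of models the abstract describes, not the authors' actual system:

```python
def simulate_tumor_immune(E0, T0, s1=0.1, r=0.18, K=1e3, a=0.002,
                          d=0.03, dt=0.01, steps=10000):
    """Forward-Euler integration of a minimal effector-cell (E) /
    tumor (T) ODE pair (illustrative, not the paper's equations):
        dE/dt = s1 - d*E              # effector influx minus decay
        dT/dt = r*T*(1 - T/K) - a*E*T # logistic growth minus kill term
    Returns the final (E, T) state."""
    E, T = float(E0), float(T0)
    for _ in range(steps):
        dE = s1 - d * E
        dT = r * T * (1.0 - T / K) - a * E * T
        E += dt * dE
        T += dt * dT
    return E, T
```

With a high influx rate `s1` the steady-state effector population `s1/d` is large enough that the kill term dominates and the tumor is cleared; with a low influx the tumor grows toward its carrying capacity, mirroring the threshold behavior described in the abstract.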
Liu, Yushi; Poh, Hee Joo
2014-11-01
Computational Fluid Dynamics (CFD) analysis has become increasingly important in modern urban planning for creating highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) model with the open-source CFD simulation tool OpenFOAM. This coupling enables simulation of wind flow and pollutant dispersion in an urban built-up area with a high-resolution mesh. In this methodology, the meso-scale WRF model provides the boundary conditions for the micro-scale CFD model in OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of building layouts can be handled with ease by the meshing utilities of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and field observation. The coupling of WRF and OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to consider future weather conditions in scenario studies on climate change impact.
Computational modeling in biomechanics
Mofrad, Mohammad
2010-01-01
This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamic, and scale coupling methods.
Computational biomechanics for medicine imaging, modeling and computing
Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol
2016-01-01
The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.
Directory of Open Access Journals (Sweden)
Hanley Edward N
2003-03-01
Abstract Background The purpose of this study is to present an automated system that analyzes digitized x-ray images of small animal spines, identifying the effects of disc degeneration. The age-related disc and spine degeneration that occurs in the sand rat (Psammomys obesus) has previously been documented radiologically; selected representative radiographs with age-related changes were used here to develop computer-assisted vertebral visualization/analysis techniques. The techniques presented here have the potential to produce quantitative algorithms that create more accurate and informative measurements in a time-efficient manner. Methods Signal and image processing techniques were applied to the digitized spine x-ray images: the spine was segmented, and its orientation and curvature determined. The image was segmented based on orientation changes of the spine, and edge detection was performed to define vertebral boundaries. Once vertebrae were identified, a number of measures were introduced and calculated to retrieve information on vertebral separation/orientation and sclerosis. Results A method is described which produces computer-generated quantitative measurements of vertebrae and disc spaces. Six sand rat spine radiographs illustrate applications of this technique. Results showed that this method can successfully automate the calculation and analysis of vertebral length, vertebral spacing, and vertebral angle, and can score sclerosis. The techniques also provide quantitative means to explore the relation between age and vertebral shape. Conclusions This method provides a computationally efficient system to analyze spinal changes during aging. The techniques can be used to automate the quantitative processing of vertebral radiographic images and may be applicable to human and other animal radiologic models of the aging/degenerating spine.
Directory of Open Access Journals (Sweden)
Qiang Duan
2015-08-01
The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. The Software-Defined Network (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain a thorough understanding of service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies, and is thus applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.
Energy Technology Data Exchange (ETDEWEB)
Pannala, S; D'Azevedo, E; Zacharia, T
2002-02-26
The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages. The results are discussed in section E. This report ends with conclusions and future scope of
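To illustrate the kind of quantity precomputed in the off-line step, here is the standard analytical view factor between two coaxial parallel disks. This is a textbook diffuse-gray result used for validation-style test cases, not code from the CRADA effort itself:

```python
import math

def parallel_disk_view_factor(r1, r2, h):
    """View factor F_{1->2} from a disk of radius r1 to a coaxial
    parallel disk of radius r2 separated by distance h, for
    diffuse-gray surfaces (standard closed-form result)."""
    R1, R2 = r1 / h, r2 / h
    S = 1.0 + (1.0 + R2 * R2) / (R1 * R1)
    return 0.5 * (S - math.sqrt(S * S - 4.0 * (R2 / R1) ** 2))
```

A quick sanity check of any view-factor routine is reciprocity, A1*F12 = A2*F21, which this closed form satisfies exactly; net-radiation solvers rely on that identity when assembling the exchange matrix.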
Directory of Open Access Journals (Sweden)
Afroza Khanam Irin
2015-01-01
Neurodegenerative as well as autoimmune diseases have unclear aetiologies, but increasing evidence points to a combination of genetic and epigenetic alterations that predispose for the development of disease. This review examines the major milestones in epigenetics research in the context of diseases and the various computational approaches developed in the last decades to unravel new epigenetic modifications. However, there are limited studies that systematically link genetic and epigenetic alterations of DNA to the aetiology of diseases. In this work, we demonstrate how disease-related epigenetic knowledge can be systematically captured and integrated with heterogeneous information into a functional context using Biological Expression Language (BEL). This novel methodology, based on BEL, enables us to integrate epigenetic modifications such as DNA methylation or acetylation of histones into a specific disease network. As an example, we depict the integration of epigenetic and genetic factors in a functional context specific to Parkinson's disease (PD) and Multiple Sclerosis (MS).
Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.
1990-01-01
An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has delivered engineering productivity gains during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, as well as component data editing, have been incorporated to enhance the engineer's productivity during a modeling program.
Computer-Based Linguistic Analysis.
Wright, James R.
Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…
Energy Technology Data Exchange (ETDEWEB)
Shaw, M; House, R; Williams, W; Haynam, C; White, R; Orth, C; Sacks, R [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA, 94550 (United States)], E-mail: shaw7@llnl.gov
2008-05-15
The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8-MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF will be the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed and deployed that automates the laser setup process and accurately predicts laser energetics. LPOM determines the settings of the injection laser system required to achieve the desired main laser output, provides equipment protection, determines the diagnostic setup, and supplies post-shot data analysis and reporting.
International Nuclear Information System (INIS)
Kim, Heon Young; Kim, Dong Won
1993-01-01
The objective of the present study is to analyze material flow in metal forming processes by using computer simulation and experiments with a model material, plasticine. A UBET program is developed to analyze the bulk flow behaviour of various metal forming problems. The elemental strain-hardening effect is considered in an incremental manner, and the element system is automatically regenerated at every deformation step in the program. The material flow behaviour in a closed-die forging process with a rib-web type cavity is analyzed by UBET and the elastic-plastic finite element method, and verified by experiments with plasticine. There was good agreement between simulation and experiment. The effect of corner rounding on material flow behaviour is investigated in the analysis of backward extrusion with a square die. A flat-punch indentation process is simulated by UBET, and the results are compared with those of the elastic-plastic finite element method. (Author)
Modification of the bubble rise model used in RELAP4/Mod5 computer code for transients analysis
International Nuclear Information System (INIS)
Scharfmann, E.
1981-01-01
The objective of this work is to improve the phase separation and heat transfer models in the RELAP4/MOD5 computer code, in order to make more realistic estimates of the thermohydraulic behavior of a core subjected to a loss-of-coolant accident. This research is directed at the analysis of accidents caused by small breaks in the primary circuit of PWR plants, where two-phase flow occurs most of the time. Calculations have been performed with the original version of the RELAP code and with the version containing the modifications proposed in this work. Comparing the latter results with the original ones, we conclude that our results show more conservative values of core pressure and coolant temperature, while the peak values of fuel temperature are not exceeded. (Author)
International Nuclear Information System (INIS)
Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.
1986-11-01
COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods.
Computer aided analysis of disturbances
International Nuclear Information System (INIS)
Baldeweg, F.; Lindner, A.
1986-01-01
Computer-aided analysis of disturbances and the prevention of failures (diagnosis and therapy control) in technological plants belong to the most important tasks of process control. Research in this field is very intensive, due to increasing requirements on the security and economy of process control and due to a remarkable increase in the efficiency of digital electronics. This publication concerns the analysis of disturbances in complex technological plants, especially in so-called high-risk processes. The presentation emphasizes the theoretical concepts of diagnosis and therapy control, modelling of the disturbance behaviour of the technological process, and man-machine communication integrating artificial intelligence methods, e.g., the expert system approach. An application is given for nuclear power plants. (author)
Mathematical Modeling and Computational Thinking
Sanford, John F.; Naidu, Jaideep T.
2017-01-01
The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…
Directory of Open Access Journals (Sweden)
Lafuente Esther M
2010-09-01
Abstract Background Proteasomes play a central role in the major histocompatibility class I (MHCI) antigen processing pathway. They conduct the proteolytic degradation of proteins in the cytosol, generating the C-terminus of CD8 T cell epitopes and MHCI-peptide ligands (the P1 residue of the cleavage site). There are two types of proteasomes: the constitutive form, expressed in most cell types, and the immunoproteasome, which is constitutively expressed in mature dendritic cells. Protective CD8 T cell epitopes are likely generated by the immunoproteasome and the constitutive proteasome, and here we have modeled and analyzed cleavage by these two proteases. Results We have modeled the immunoproteasome and proteasome cleavage sites upon two non-overlapping sets of peptides, consisting of 553 CD8 T cell epitopes naturally processed and restricted by human MHCI molecules and 382 peptides eluted from human MHCI molecules, respectively, using N-grams. Cleavage models were generated considering different epitope and MHCI-eluted fragment lengths and the same number of C-terminal flanking residues. Models were evaluated in 5-fold cross-validation. Judging by the Matthews Correlation Coefficient (MCC), optimal cleavage models for the proteasome (MCC = 0.43 ± 0.07) and the immunoproteasome (MCC = 0.36 ± 0.06) were obtained from 12-residue peptide fragments. Using an independent dataset consisting of 137 HIV1-specific CD8 T cell epitopes, the immunoproteasome and proteasome cleavage models achieved MCC values of 0.30 and 0.18, respectively, comparatively better than those achieved by related methods. Using ROC analyses, we have also shown that, combined with MHCI-peptide binding predictions, cleavage predictions by the immunoproteasome and proteasome models significantly increase the discovery rate of CD8 T cell epitopes restricted by different MHCI molecules, including A*0201, A*0301, A*2402, B*0702, and B*2705. Conclusions We have developed models that are specific
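For reference, the evaluation metric quoted above, the Matthews Correlation Coefficient, is computed from confusion-matrix counts as follows. This is the generic definition, independent of the cleavage models themselves:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction); returns 0.0 when undefined."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike raw accuracy, MCC stays informative on imbalanced datasets such as cleavage-site prediction, where true cleavage sites are a small minority of residues.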
COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT
Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar
2011-01-01
Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...
Gulothungan, G.; Malathi, R.
2018-04-01
Disturbed sodium (Na+) and calcium (Ca2+) handling is known to be a major predisposing factor for life-threatening cardiac arrhythmias. Cardiac contractility in ventricular tissue is governed by Ca2+ pathways such as voltage-dependent Ca2+ channels, the sodium-calcium exchanger (Na+-Ca2+x), and the sarcoplasmic reticulum (SR) Ca2+ pump and leakage channels. Experimental and clinical possibilities for studying cardiac arrhythmias in human ventricular myocardium are very limited; therefore, the use of alternative methods such as computer simulations is of great importance. The aim of this article is to study the impact of different Ca2+ channel dysfunction conditions on action potential (AP) generation and propagation in a single ventricular myocyte and in ventricular tissue. With enhanced Na+-Ca2+x activity, the single myocyte produces significantly shorter AP durations (APD90 = 266 ms and APD50 = 235 ms). Its Na+-Ca2+x current at depolarization increases by 60% from its normal level, and the repolarization current becomes more negative (nonfailing = -0.28 pA/pF and failing = -0.47 pA/pF). Similarly, the same enhanced Na+-Ca2+x activity in a 10 mm region of a ventricular sheet raises the plateau potential abruptly, which ultimately affects diastolic repolarization. Compared with a normal 10 mm ventricular sheet region, the resting state is reduced over 10% of the sheet, and the sheet returns to the resting state very early, by 250 ms. In the hypertrophy condition, the single myocyte produces notably shorter APD90 and APD50 (232 ms and 198 ms). Its sodium-potassium (Na+-K+) pump current is reduced by 75% from its control value (0.13 pA/pF). Under hypertrophy, 50% of the ventricular sheet is reduced to a minimum plateau-potential state, which starts the repolarization process very early and shortens the APD. In a single myocyte with failing SR Ca2+ channels, recovery of the Ca2+ concentration level in the SR is reduced by up to 15% relative to control myocytes. At time 290 ms, 70% of the ventricular sheet
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts
2015-07-01
uncertainty analyses throughout the lifecycle of planning, designing, and operating Civil Works flood risk management projects as described in...
Directory of Open Access Journals (Sweden)
Mohsen Mehrabi
2012-01-01
Full Text Available This study focuses on the behavior of blood flow in stenosed vessels. Blood is modelled as an incompressible non-Newtonian fluid based on the power-law viscosity model. A numerical technique based on the finite difference method is developed to simulate the blood flow, taking into account the transient periodic behaviour of the blood flow in cardiac cycles. Pulsatile blood flow in the stenosed vessel is based on the Womersley model, and fluid flow in the lumen region is governed by the continuity equation and the Navier-Stokes equations. In this study, the stenosis has a cosine shape following the Tu and Devil model. Comparing the results obtained from three stenosed vessels with 30%, 50%, and 75% area severity, we find that higher percent-area severity of stenosis leads to higher pressure jumps and higher blood speeds around the stenosis site. We also observe that the size of the stenosis in stenosed vessels does influence the blood flow: a small change in the cross-sectional area produces a large change in the blood flow rate. This simulation will be useful to researchers working in the field of physiological fluid dynamics as well as to medical practitioners.
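A minimal sketch of two ingredients named above, the power-law viscosity model and a cosine-shaped stenosis profile; all parameter values (k, n, radii, lengths) are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def power_law_viscosity(shear_rate, k=0.017, n=0.708):
    """Apparent viscosity mu = k * gamma^(n-1) of a power-law fluid.
    k (Pa*s^n) and n are illustrative blood-like values, not the paper's fit.
    For n < 1 the fluid is shear-thinning: viscosity falls as shear rate rises."""
    return k * shear_rate ** (n - 1.0)

def stenosis_radius(z, r0=2.0e-3, delta=0.5e-3, L=8.0e-3):
    """Cosine-shaped constriction: local radius over a stenosis of length L
    centered at z = 0, with maximum radius reduction delta at the throat
    (hypothetical geometry, only the cosine shape follows the text)."""
    return np.where(np.abs(z) <= L / 2,
                    r0 - 0.5 * delta * (1.0 + np.cos(2.0 * np.pi * z / L)),
                    r0)
```

The steep sensitivity of flow rate to cross-sectional area reported above is consistent with Poiseuille-type scaling, where flow varies with the fourth power of radius.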
Czech Academy of Sciences Publication Activity Database
Frost, Miroslav; Benešová, B.; Sedlák, P.
2016-01-01
Roč. 21, č. 3 (2016), s. 358-382 ISSN 1081-2865 R&D Projects: GA ČR GA13-13616S; GA ČR GAP201/10/0357 Institutional support: RVO:61388998 Keywords : shape memory alloys * constitutive model * generalized standard materials * dissipation * energetic solution Subject RIV: BA - General Mathematics Impact factor: 2.953, year: 2016 http://mms.sagepub.com/content/21/3/358
Czech Academy of Sciences Publication Activity Database
Haslinger, J.; Stebel, Jan
2011-01-01
Roč. 63, č. 2 (2011), s. 277-308 ISSN 0095-4616 R&D Projects: GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords : optimal shape design * paper machine headbox * incompressible non-Newtonian fluid * algebraic turbulence model Subject RIV: BA - General Mathematics Impact factor: 0.952, year: 2011 http://link.springer.com/article/10.1007%2Fs00245-010-9121-x
Modeling and Analysis of Shape with Applications in Computer-aided Diagnosis of Breast Cancer
Guliato, Denise
2011-01-01
Malignant tumors due to breast cancer and masses due to benign disease appear in mammograms with different shape characteristics: the former usually have rough, spiculated, or microlobulated contours, whereas the latter commonly have smooth, round, oval, or macrolobulated contours. Features that characterize shape roughness and complexity can assist in distinguishing between malignant tumors and benign masses. In spite of the established importance of shape factors in the analysis of breast tumors and masses, difficulties exist in obtaining accurate and artifact-free boundaries of the related
Biomedical Imaging and Computational Modeling in Biomechanics
Iacoviello, Daniela
2013-01-01
This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model: The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example, the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
Personal Computer Transport Analysis Program
DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter
2012-01-01
The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components, and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
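Ordering components by their inlet dependency, as the solution-vector construction above describes, is essentially a topological sort. A sketch with a hypothetical component graph (PCTAP itself is C++; this Python fragment illustrates only the ordering idea, and the component names are made up):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical thermal-fluid loop: each component lists the components
# feeding its inlet (not PCTAP's actual component set).
inlet_deps = {
    "pump": [],
    "cold_plate": ["pump"],
    "heat_exchanger": ["cold_plate"],
    "scrubber": ["heat_exchanger"],
    "tank": ["scrubber"],
}

# The "solution vector": every component appears after everything
# feeding its inlet, so per-time-step updates can run front to back.
solution_vector = list(TopologicalSorter(inlet_deps).static_order())
```

Updating components in this order guarantees that each component's inlet conditions are already current when its outlet function executes.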
Analysis of Ion Currents Contribution to Repolarization in Human Heart Failure Using Computer Models
Energy Technology Data Exchange (ETDEWEB)
Marotta, F.; Paci, M.A.; Severi, S.; Trenor, B.
2016-07-01
The mechanisms underlying repolarization of the ventricular action potential (AP) are a subject of research for anti-arrhythmic drugs. In fact, prolongation of the AP occurs in several conditions of heart disease, such as heart failure, a major precursor of serious arrhythmias. In this study, we investigated the phenomena of repolarization reserve, defined as the capacity of the cell to repolarize in case of a functional loss, and of all-or-none repolarization, which depends on the delicate balance of inward and outward currents in the different phases of the AP, under conditions of human heart failure (HF). To simulate HF conditions, the O'Hara et al. human AP model was modified and specific protocols for all-or-none repolarization were applied. Our results show that in early repolarization the threshold for all-or-none repolarization is not altered in HF, even if a decrease in potassium currents can be observed. To quantify the contribution of the individual ion currents to the HF-induced AP prolongation, we used a novel piecewise-linear approximation approach proposed by Paci et al. In particular, INaL and ICaL are mainly responsible for the APD prolongation due to HF (85 and 35 ms, respectively). Our results highlight this novel algorithm as a powerful tool to obtain a more complete picture of the complex ionic mechanisms underlying this disease and confirm the important role of the late sodium current in HF repolarization. (Author)
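AP duration at a given repolarization fraction (APD90, APD50) is the central quantity in studies like this one; a sketch of how it can be measured from a simulated trace, using a synthetic exponential waveform rather than the O'Hara model (all waveform parameters are illustrative):

```python
import numpy as np

def apd(t, v, repol_frac=0.90):
    """AP duration at a repolarization fraction (repol_frac=0.90 gives APD90).
    t in ms, v in mV; assumes a resting phase, a single upstroke, then
    monotone repolarization after the peak."""
    v_rest, v_peak = v[0], v.max()
    i_peak = int(v.argmax())
    v_target = v_peak - repol_frac * (v_peak - v_rest)
    # first sample after the peak at or below the target level
    i_repol = i_peak + int(np.argmax(v[i_peak:] <= v_target))
    return t[i_repol] - t[i_peak]

# Synthetic AP: rest at -85 mV, instantaneous upstroke to +40 mV at t = 50 ms,
# then exponential repolarization (illustrative shape, not a cell model).
t = np.linspace(0.0, 600.0, 6001)
v = np.where(t < 50.0, -85.0, -85.0 + 125.0 * np.exp(-(t - 50.0) / 120.0))
apd90 = apd(t, v)
```

Real cell models produce a plateau phase rather than a single exponential, but the same threshold-crossing measurement applies.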
Computer Aided Modeling of Human Mastoid Cavity Biomechanics Using Finite Element Analysis
Directory of Open Access Journals (Sweden)
Chou Yuan-Fang
2010-01-01
Full Text Available The aim of the present study was to analyze the effect of the human mastoid cavity on sound transmission using the finite element method. Pressure distributions in the external ear canal and middle ear cavity at different frequencies were demonstrated. Our results showed that, first, blocking the aditus improves middle ear sound transmission in the 1500- to 2500-Hz range and decreases displacement at frequencies below 1000 Hz when compared with the normal ear. Second, at frequencies lower than 1000 Hz, the acoustic pressures were almost uniformly distributed in the external ear canal and middle ear cavity. At frequencies higher than 1000 Hz, the pressure distribution varied along the external ear canal and middle ear cavity. Third, with the aditus open, the pressure differences in dB between the middle ear cavity and the external ear canal were larger than those of the closed mastoid cavity at low frequencies (<1000 Hz). Finally, no significant difference in the acoustic pressure between the oval window and the round window was noted; the pressure increased by 5 dB when the aditus was blocked. These results suggest that our complete FE model including the mastoid cavity is potentially useful and can provide more information in the study of middle ear biomechanics.
Vintila, Iuliana; Gavrus, Adinel
2017-10-01
The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Considering a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using a shear stress-strain rate dependency based on a power law and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size and emulsifier concentration. The parameters of the rheological laws describing the behavior of the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
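For a Bingham-type material the flow curve tau = tau0 + eta_p * gamma is linear in its two parameters, so the regression step can be sketched very simply (the synthetic data and parameter values below are illustrative, not the measured oils or emulsions; the paper's actual identification uses non-linear regression and inverse analysis):

```python
import numpy as np

# Synthetic flow-curve data for a Bingham-like fluid (illustrative only):
true_tau0, true_eta = 12.0, 0.35          # yield stress (Pa), plastic viscosity (Pa*s)
gamma = np.linspace(1.0, 100.0, 50)       # shear rate (1/s)
rng = np.random.default_rng(0)
tau = true_tau0 + true_eta * gamma + rng.normal(0.0, 0.2, gamma.size)

# Bingham law tau = tau0 + eta_p * gamma is linear in its parameters,
# so ordinary least squares recovers them directly:
eta_fit, tau0_fit = np.polyfit(gamma, tau, 1)
```

A power-law or Herschel-Bulkley fit (tau = tau0 + K * gamma^n) is nonlinear in n and needs iterative regression, which is where a technique like the one cited in the abstract comes in.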
A computational description of simple mediation analysis
Directory of Open Access Journals (Sweden)
Caron, Pier-Olivier
2018-04-01
Full Text Available Simple mediation analysis is an increasingly popular statistical analysis in psychology and in other social sciences. However, there are very few detailed accounts of the computations within the model. Articles more often focus on explaining mediation analysis conceptually rather than mathematically. Thus, the purpose of the current paper is to introduce the computational modelling within simple mediation analysis, accompanied by examples in R. Firstly, mediation analysis is described. Then, the method to simulate data in R (with standardized coefficients) is presented. Finally, the bootstrap method, the Sobel test and the Baron and Kenny test, all used to evaluate mediation (i.e., the indirect effect), are developed. The R code to implement the computations presented is offered, as well as a script to carry out a power analysis and a complete example.
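A sketch of the indirect effect and Sobel test on simulated standardized data, in Python rather than the paper's R (the coefficients, sample size, and seed are arbitrary choices for illustration):

```python
import numpy as np

def ols(X, y):
    """OLS estimates and standard errors; X must include an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

def sobel(x, m, y):
    """Indirect effect a*b and Sobel z for simple mediation X -> M -> Y."""
    ones = np.ones(len(x))
    beta_m, se_m = ols(np.column_stack([ones, x]), m)     # M ~ X      (path a)
    beta_y, se_y = ols(np.column_stack([ones, x, m]), y)  # Y ~ X + M  (path b)
    a, sa = beta_m[1], se_m[1]
    b, sb = beta_y[2], se_y[2]
    ab = a * b
    return ab, ab / np.sqrt(b**2 * sa**2 + a**2 * sb**2)

# Simulated standardized data with a true indirect effect of 0.25 (illustrative):
rng = np.random.default_rng(1)
x = rng.normal(size=500)
m = 0.5 * x + rng.normal(size=500)
y = 0.5 * m + rng.normal(size=500)
ab, z = sobel(x, m, y)
```

The bootstrap alternative the paper also covers would resample (x, m, y) rows, recompute ab on each resample, and read a confidence interval off the resulting distribution, which avoids the Sobel test's normality assumption.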
International Nuclear Information System (INIS)
Potter, J.M.
1985-01-01
The mathematical background for a multiport-network-solving program is described. A method for accurately numerically modeling an arbitrary, continuous, multiport transmission line is discussed. A modification to the transmission-line equations to accommodate multiple rf drives is presented. An improved model for the radio-frequency quadrupole (RFQ) accelerator that corrects previous errors is given. This model permits treating the RFQ as a true eight-port network for simplicity in interpreting the field distribution and ensures that all modes propagate at the same velocity in the high-frequency limit. The flexibility of the multiport model is illustrated by simple modifications to otherwise two-dimensional systems that permit modeling them as linear chains of multiport networks
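The chain-of-multiport-networks idea can be illustrated in its simplest setting: a two-port transmission line modeled as a cascade of short segments via ABCD matrices (the characteristic impedance and segment count below are illustrative, not values from the RFQ model):

```python
import numpy as np

def line_abcd(beta_l, z0=50.0):
    """ABCD (transfer) matrix of a lossless transmission-line segment with
    electrical length beta*l in radians and characteristic impedance z0
    (z0 = 50 ohms is an illustrative choice)."""
    return np.array([
        [np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
        [1j * np.sin(beta_l) / z0, np.cos(beta_l)],
    ])

# A continuous line modeled as a chain of short segments: cascading 100
# segments of electrical length pi/200 reproduces a quarter-wave line,
# since ABCD matrices multiply along the chain.
segments = [line_abcd(np.pi / 2 / 100)] * 100
total = np.linalg.multi_dot(segments)
```

The multiport (eight-port) generalization replaces these 2x2 matrices with larger transfer matrices, but the cascading rule is the same matrix product.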
Computer graphics in reactor safety analysis
International Nuclear Information System (INIS)
Fiala, C.; Kulak, R.F.
1989-01-01
This paper describes a family of three computer graphics codes designed to assist the analyst in three areas: the modelling of complex three-dimensional finite element models of reactor structures; the interpretation of computational results; and the reporting of the results of numerical simulations. The purpose and key features of each code are presented. The graphics output used in actual safety analysis are used to illustrate the capabilities of each code. 5 refs., 10 figs
Analysis of computer programming languages
International Nuclear Information System (INIS)
Risset, Claude Alain
1967-01-01
This research thesis aims at trying to identify some methods of syntax analysis which can be used for computer programming languages while putting aside computer devices which influence the choice of the programming language and methods of analysis and compilation. In a first part, the author proposes attempts of formalization of Chomsky grammar languages. In a second part, he studies analytical grammars, and then studies a compiler or analytic grammar for the Fortran language
Computational Modeling of Space Physiology
Lewandowski, Beth E.; Griffin, Devon W.
2016-01-01
The Digital Astronaut Project (DAP), within NASAs Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.
International Nuclear Information System (INIS)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee
2015-01-01
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS 'pathways,' or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
Energy Technology Data Exchange (ETDEWEB)
Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Herberger, Sarah Elizabeth Marie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
Hassan, Cesare; Pickhardt, Perry J; Pickhardt, Perry; Laghi, Andrea; Kim, Daniel H; Kim, Daniel; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio
2008-04-14
In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasm, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with a savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. In the sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations of costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was less than the estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When detection of extracolonic findings such as AAA and extracolonic cancer is considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (ie, more clinically effective and more cost
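The cohort-style Markov simulation underlying such cost-effectiveness models can be sketched in a few lines; the states and annual transition probabilities below are toy values, not the study's calibrated inputs for colorectal neoplasia, extracolonic cancer, or AAA:

```python
import numpy as np

# Toy 3-state annual-cycle Markov cohort model (well -> disease -> dead).
P = np.array([
    [0.97, 0.02, 0.01],   # well:    stay well, develop disease, die
    [0.00, 0.80, 0.20],   # disease: persist, die
    [0.00, 0.00, 1.00],   # dead:    absorbing state
])
cohort = np.array([100_000.0, 0.0, 0.0])   # hypothetical cohort of 50-year-olds
life_years = 0.0
for _ in range(50):                        # simulate 50 annual cycles
    life_years += cohort[:2].sum()         # person-years lived this cycle
    cohort = cohort @ P                    # advance the cohort one year
```

Competing screening strategies are then represented by different transition probabilities (e.g. detection shifting subjects out of the disease path) and per-state costs, and compared by accumulated life-years and cost per person.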
Numerical Analysis of Multiscale Computations
Engquist, Björn; Tsai, Yen-Hsi R
2012-01-01
This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms and practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as are tools such as analytical and numerical homogenization and the fast multipole method.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Computational modelling in fluid mechanics
International Nuclear Information System (INIS)
Hauguel, A.
1985-01-01
Modelling most environmental or industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented is now a complementary tool to experimental facilities for carrying out studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr
Chaos Modelling with Computers
Indian Academy of Sciences (India)
Chaos is one of the major scientific discoveries of our times. In fact many scientists ... But there are other natural phenomena that are not predictable though ... characteristics of chaos. ... The position and velocity are all that are needed to determine the motion of a .... a system of equations that modelled the earth's weather ...
Patient-Specific Computational Modeling
Peña, Estefanía
2012-01-01
This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.
Computer model for ductile fracture
International Nuclear Information System (INIS)
Moran, B.; Reaugh, J. E.
1979-01-01
A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J/sub Ic/. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak.
Trust Models in Ubiquitous Computing
DEFF Research Database (Denmark)
Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro
2008-01-01
We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.
Introducing Seismic Tomography with Computational Modeling
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students the possibility to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes taking advantage of basic scientific computation methods and tools.
International Nuclear Information System (INIS)
Scharfmann, E.; Silva, D.E. da
1981-01-01
This work describes modifications to the phase separation model and heat transfer model in the Relap4/Mod 5 computer code, made in order to obtain more realistic estimates of core thermohydraulic behavior during a loss-of-coolant accident. The research is directed at the analysis of accidents caused by small breaks in the primary circuits of PWR plants, where two-phase flow occurs most of the time. Calculations have been performed with the original version of the Relap code as well as with the version containing the modifications proposed in this work. Comparing the two sets of results, we conclude that the modified code yields more conservative values of core pressure and coolant temperature, while the peak values of fuel temperature are not exceeded. (Author)
Visual and Computational Modelling of Minority Games
Directory of Open Access Journals (Sweden)
Robertas Damaševičius
2017-02-01
Full Text Available The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based, and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for the formal specification of software gamification, and the UAREI visual modelling language is used for the graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model by modelling different variants of the Minority Game rules for game design.
Sierra toolkit computational mesh conceptual model
International Nuclear Information System (INIS)
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-01-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Hansen, John A.; Barnett, Michael; MaKinster, James G.; Keating, Thomas
2004-01-01
In this study, we explore an alternate mode for teaching and learning the dynamic, three-dimensional (3D) relationships that are central to understanding astronomical concepts. To this end, we implemented an innovative undergraduate course in which we used inexpensive computer modeling tools. As the second of a two-paper series, this report…
Hariharan, Prasanna; Giarra, Matthew; Reddy, Varun; Day, Steven W; Manning, Keefe B; Deutsch, Steven; Stewart, Sandy F C; Myers, Matthew R; Berman, Michael R; Burgreen, Greg W; Paterson, Eric G; Malinauskas, Richard A
2011-04-01
This study is part of a FDA-sponsored project to evaluate the use and limitations of computational fluid dynamics (CFD) in assessing blood flow parameters related to medical device safety. In an interlaboratory study, fluid velocities and pressures were measured in a nozzle model to provide experimental validation for a companion round-robin CFD study. The simple benchmark nozzle model, which mimicked the flow fields in several medical devices, consisted of a gradual flow constriction, a narrow throat region, and a sudden expansion region where a fluid jet exited the center of the nozzle with recirculation zones near the model walls. Measurements of mean velocity and turbulent flow quantities were made in the benchmark device at three independent laboratories using particle image velocimetry (PIV). Flow measurements were performed over a range of nozzle throat Reynolds numbers (Re(throat)) from 500 to 6500, covering the laminar, transitional, and turbulent flow regimes. A standard operating procedure was developed for performing experiments under controlled temperature and flow conditions and for minimizing systematic errors during PIV image acquisition and processing. For laminar (Re(throat)=500) and turbulent flow conditions (Re(throat)≥3500), the velocities measured by the three laboratories were similar with an interlaboratory uncertainty of ∼10% at most of the locations. However, for the transitional flow case (Re(throat)=2000), the uncertainty in the size and the velocity of the jet at the nozzle exit increased to ∼60% and was very sensitive to the flow conditions. An error analysis showed that by minimizing the variability in the experimental parameters such as flow rate and fluid viscosity to less than 5% and by matching the inlet turbulence level between the laboratories, the uncertainties in the velocities of the transitional flow case could be reduced to ∼15%. The experimental procedure and flow results from this interlaboratory study (available
Affective Computing and Sentiment Analysis
Ahmad, Khurshid
2011-01-01
This volume maps the watershed areas between two 'holy grails' of computer science: the identification and interpretation of affect, including sentiment and mood. The expression of sentiment and mood involves the use of metaphors, especially in emotive situations. Affective computing is rooted in hermeneutics, philosophy, political science and sociology, and is now a key area of research in computer science. The 24/7 news sites and blogs facilitate the expression and shaping of opinion locally and globally. Sentiment analysis, based on text and data mining, is being used in the analysis of news
Trust models in ubiquitous computing.
Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro
2008-10-28
We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.
Ch. 33 Modeling: Computational Thermodynamics
International Nuclear Information System (INIS)
Besmann, Theodore M.
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance - Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.
Haimes, Robert; Follen, Gregory J.
1998-01-01
CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.
Directory of Open Access Journals (Sweden)
Yoshinobu Tamura
2015-06-01
Full Text Available At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of the unified management of data, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, the network-based operation and the diversity of data, because the operation phase of cloud computing changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of the network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost, considering big data on cloud computing.
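A sample path of a jump diffusion driven by two independent Wiener processes, as used in the stability assessment above, can be simulated with an Euler-type scheme. The drift, diffusion coefficients, and compound-Poisson jump form below are illustrative assumptions, not the parameterization used by the authors.

```python
import numpy as np

# Schematic sample path of a geometric jump diffusion with two independent
# Wiener processes W1, W2 plus Poisson-timed jumps. All parameter values
# are assumptions for illustration.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
mu, s1, s2 = 0.05, 0.2, 0.1      # drift and the two diffusion coefficients
lam, jump_mean = 3.0, -0.05      # jump intensity and mean log-jump size

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    dw1 = rng.normal(0.0, np.sqrt(dt))   # increment of the first Wiener process
    dw2 = rng.normal(0.0, np.sqrt(dt))   # increment of the second Wiener process
    # A jump occurs in this step with probability ~ lam * dt.
    jump = rng.normal(jump_mean, 0.02) if rng.random() < lam * dt else 0.0
    x[k + 1] = x[k] * np.exp(mu * dt + s1 * dw1 + s2 * dw2 + jump)

print(x.shape)  # one simulated sample path of length n + 1
```

Stability measures (e.g. threshold crossings or sample-path dispersion) would then be estimated by generating many such paths.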
Energy Technology Data Exchange (ETDEWEB)
Saffer, Shelley (Sam) I.
2014-12-01
This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project's goals and the resulting research made possible by this award.
Time series modeling, computation, and inference
Prado, Raquel
2010-01-01
The authors systematically develop a state-of-the-art analysis and modeling of time series. "… this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit
Computer Modelling of Dynamic Processes
Directory of Open Access Journals (Sweden)
B. Rybakin
2000-10-01
Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
Koszela, K.; Otrząsek, J.; Zaborowicz, M.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The farming area for vegetables in Poland changes and is modified constantly. Each year the cultivation structure of particular vegetables is different. However, it is the cultivation of carrots that plays a significant role among vegetables. According to the Main Statistical Office (GUS), in 2012 carrot held second position among the cultivated root vegetables, with production estimated at 835 thousand tons. Poland is perceived as one of the world's leading producers of carrot, ranking fourth among global producers, and is the largest producer of this vegetable in the EU [1]. It is also noteworthy that the demand for dried vegetables is still increasing. This tendency affects the development of the drying industry in our country, contributing to the utilization of product surpluses. Dried vegetables are used increasingly often in various sectors of the food products industry, due to their high nutrition value as well as to the changing alimentary preferences of consumers [2-3]. Dried carrot plays a crucial role among dried vegetables, because of its wide scope of use and high nutrition value. It contains a lot of carotene and sugar present in the form of crystals. Carrot also undergoes many different drying processes, which makes it difficult to perform a reliable quality assessment and classification of this dried material. Among the many qualitative properties of dried carrot that have an important influence on the result of the quality assessment are its color and shape. The aim of the research project was to develop a method for the analysis of microwave-vacuum dried carrot images, and to apply it to the classification of individual fractions in the sample studied for quality assessment. During the research, digital photographs of dried carrot were taken, which constituted the basis for assessment performed by a dedicated computer programme developed as a part of the research. Consequently, using a neural model, the dried material was classified [4-6].
Computer codes for safety analysis
International Nuclear Information System (INIS)
Holland, D.F.
1986-11-01
Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample of results given, followed by a discussion of the present status and future development plans.
Computational algebraic geometry of epidemic models
Rodríguez Vega, Martín.
2014-06-01
Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, in the Groebner bases, in the Hilbert dimension, and in the Hilbert polynomials. It is hoped that the results obtained in this paper prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
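The Groebner-basis step can be sketched outside Maple as well. The SIS-type steady-state system below, with numeric parameter choices (beta = 3, gamma = 1, mu = 1), is a hypothetical stand-in for the paper's Schistosomiasis/Dengue models; it only illustrates how a lex-order Groebner basis puts the equilibrium equations into triangular form.

```python
from sympy import symbols, groebner

# Hypothetical SIS-type equilibrium system (illustrative, not the paper's):
#   f1 = mu - beta*S*I - mu*S + gamma*I = 0
#   f2 = beta*S*I - (gamma + mu)*I     = 0
# with beta = 3, gamma = 1, mu = 1 (so R0 = beta/(gamma + mu) = 3/2 > 1).
S, I = symbols('S I')
f1 = 1 - 3*S*I - S + I
f2 = 3*S*I - 2*I

# Lex-order Groebner basis: a triangular system from which the equilibria
# (disease-free and endemic) can be read off variable by variable.
gb = groebner([f1, f2], S, I, order='lex')
print(gb.exprs)

# The population conservation law S + I = 1 lies in the steady-state ideal:
print(gb.contains(S + I - 1))  # True
```

Note how membership of `S + I - 1` in the ideal (checked via reduction against the basis) recovers a structural property of the model without solving the ODEs.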
Getting computer models to communicate
International Nuclear Information System (INIS)
Caremoli, Ch.; Erhard, P.
1999-01-01
Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution; to couple the existing validated numerical models together so that they work as one. (authors)
Technical Note: SPEKTR 3.0—A computational tool for x-ray spectrum modeling and analysis
Energy Technology Data Exchange (ETDEWEB)
Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H., E-mail: jeff.siewerdsen@jhu.edu [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)
2016-08-15
Purpose: A computational toolkit (SPEKTR 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a MATLAB (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The SPEKTR code generates x-ray spectra (photons/mm{sup 2}/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, SPEKTR, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the SPEKTR function library, UI, and optimization tool are available.
Computational Modeling in Liver Surgery
Directory of Open Access Journals (Sweden)
Bruno Christ
2017-11-01
Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.
International Nuclear Information System (INIS)
Ko, Jong-Hwan.
1993-01-01
Firstly, this study investigates the causes of sectoral growth and structural change in the Korean economy. Secondly, it develops the framework of a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done using both an Input-Output decomposition analysis and a Computable General Equilibrium model (CGE model). The CGE model eliminates the disadvantages of the IO model and allows the investigation of the interdependence of the various energy sectors with the economy. A Social Accounting Matrix serves as the data basis of the CGE model. Simulation experiments have been carried out with the help of the CGE model, indicating the likely impact of an oil price shock on the economy, both sectorally and in general. (orig.)
Temporal fringe pattern analysis with parallel computing
International Nuclear Information System (INIS)
Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca
2005-01-01
Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis
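The single-program multiple-data strategy described above amounts to running the same per-frame routine on different shares of the frame sequence. The sketch below uses Python's `multiprocessing` pool; the per-frame routine (a dominant-spatial-frequency pick via FFT) is a stand-in for the fringe-analysis step actually used in the study, and the frame data are synthetic.

```python
import numpy as np
from multiprocessing import Pool

def analyse_frame(frame):
    """Return the index of the dominant spatial frequency in one frame.
    Stand-in for the temporal fringe-analysis routine (assumption)."""
    spectrum = np.abs(np.fft.rfft(frame))
    spectrum[0] = 0.0            # ignore the DC term
    return int(np.argmax(spectrum))

def main():
    rng = np.random.default_rng(1)
    # Synthetic "fringe" frames: a 5-cycle sinusoid plus noise.
    frames = [np.sin(2*np.pi*5*np.linspace(0, 1, 256))
              + 0.1*rng.standard_normal(256) for _ in range(64)]

    # SPMD: 4 workers each run the same program on their share of frames.
    with Pool(4) as pool:
        parallel = pool.map(analyse_frame, frames)

    serial = [analyse_frame(f) for f in frames]
    print(parallel == serial)  # identical results, computed concurrently

if __name__ == '__main__':
    main()
```

As the abstract notes, the achievable speedup is bounded by data transfer and I/O time, not by the number of (virtual) processors alone.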
Systems analysis and the computer
Energy Technology Data Exchange (ETDEWEB)
Douglas, A S
1983-08-01
The words 'systems analysis' are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, the study of the human factors affecting the overall systems within which the automata are embedded has an even more important role. 19 references.
Directory of Open Access Journals (Sweden)
Daniel Tik-Pui Fong
2012-10-01
Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.
Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian
2017-03-01
Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates two advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten unilateral alveolar cleft patients were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was used on the 3D-printed model, and the graft volume was measured by water displacement. The volume calculated by CAE software used a mirror-reverse technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts from the 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 mL calculated by CAE software. The difference between the two volumes ranged from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the two methods. This study demonstrated that the mirror-reverse technique in CAE software is as accurate as the simulated operation on 3D-printed models in unilateral alveolar cleft patients. These findings further validate the use of 3D printing and the CAE technique in alveolar defect repair.
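The statistical comparison above is a paired Student t test on per-patient volume pairs. The sketch below shows the procedure with `scipy.stats.ttest_rel`; the ten volume pairs are invented for illustration (the paper reports only the means and the difference range), so only the method, not the numbers, reflects the study.

```python
import numpy as np
from scipy import stats

# Paired comparison of the two measurement methods. The per-patient
# volumes (mL) below are fabricated illustrative values, NOT study data.
vol_3d_print = np.array([1.3, 1.6, 1.4, 1.7, 1.5, 1.6, 1.4, 1.5, 1.7, 1.5])
vol_cae      = np.array([1.4, 1.5, 1.3, 1.6, 1.5, 1.5, 1.4, 1.4, 1.6, 1.5])

# Paired Student t test: each patient is measured by both methods, so the
# test is on the within-patient differences, not on two independent groups.
t_stat, p_value = stats.ttest_rel(vol_3d_print, vol_cae)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no statistically significant
# difference between the methods, matching the study's conclusion.
```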
Computer aided safety analysis 1989
International Nuclear Information System (INIS)
1990-04-01
The meeting was conducted in a workshop style, to encourage involvement of all participants during the discussions. Forty-five (45) experts from 19 countries, plus 22 experts from the GDR participated in the meeting. A list of participants can be found at the end of this volume. Forty-two (42) papers were presented and discussed during the meeting. Additionally an open discussion was held on the possible directions of the IAEA programme on Computer Aided Safety Analysis. A summary of the conclusions of these discussions is presented in the publication. The remainder of this proceedings volume comprises the transcript of selected technical papers (22) presented in the meeting. It is the intention of the IAEA that the publication of these proceedings will extend the benefits of the discussions held during the meeting to a larger audience throughout the world. The Technical Committee/Workshop on Computer Aided Safety Analysis was organized by the IAEA in cooperation with the National Board for Safety and Radiological Protection (SAAS) of the German Democratic Republic in Berlin. The purpose of the meeting was to provide an opportunity for discussions on experiences in the use of computer codes used for safety analysis of nuclear power plants. In particular it was intended to provide a forum for exchange of information among experts using computer codes for safety analysis under the Technical Cooperation Programme on Safety of WWER Type Reactors (RER/9/004) and other experts throughout the world. A separate abstract was prepared for each of the 22 selected papers. Refs, figs tabs and pictures
Directory of Open Access Journals (Sweden)
Benn Macdonald
2015-11-01
Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
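The two-step gradient-matching scheme described above (smooth the data, then match interpolant slopes to the ODE right-hand side) can be sketched in a few lines. The logistic-growth ODE and its noise-free data below are illustrative choices, not one of the review's benchmarks.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Gradient matching for dx/dt = theta * x * (1 - x) (illustrative ODE).
theta_true = 1.5
t = np.linspace(0.0, 4.0, 40)
# Exact logistic solution with x(0) = 0.1, standing in for observed data.
x = 0.1 * np.exp(theta_true * t) / (1 + 0.1 * (np.exp(theta_true * t) - 1))

# Step 1: interpolate the time series and take the interpolant's slopes.
spline = CubicSpline(t, x)
slopes = spline(t, 1)            # dx/dt estimated from the spline

# Step 2: pick theta so the ODE right-hand side matches those slopes;
# the ODE itself is never numerically integrated.
def mismatch(theta):
    return np.sum((slopes - theta * x * (1 - x)) ** 2)

res = minimize_scalar(mismatch, bounds=(0.1, 5.0), method='bounded')
print(round(res.x, 2))  # close to theta_true
```

Because the mismatch is evaluated from precomputed slopes, each candidate `theta` costs one vector operation rather than one ODE solve, which is the source of the computational savings the review discusses.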
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
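For a binary output with first- and second-order moment constraints, the maximum-noise-entropy response described above takes the logistic form P(y=1|s) = sigma(a + b·s + s'Qs). The sketch below evaluates such a second-order model; the parameter values are chosen by hand for illustration, not fitted to retinal or thalamic data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spike_probability(s, a, b, Q):
    """Second-order maximum noise entropy model for a binary output:
    logistic function of a linear plus quadratic form in the stimulus."""
    return sigmoid(a + b @ s + s @ Q @ s)

# Illustrative parameters for a 2D stimulus projection (assumed values).
a = -1.0                          # sets the baseline output rate
b = np.array([0.8, -0.3])         # first-order (mean) constraint terms
Q = np.array([[0.5, 0.1],         # second-order (covariance) constraint terms
              [0.1, -0.2]])

rng = np.random.default_rng(2)
stimuli = rng.standard_normal((5, 2))
probs = np.array([spike_probability(s, a, b, Q) for s in stimuli])
print(np.all((probs > 0) & (probs < 1)))  # valid probabilities: True
```

With Q = 0 the model reduces to the familiar first-order logistic (minimum mutual information) case; the quadratic term is what captures second-order stimulus correlations.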
PETRI NET MODELING OF COMPUTER VIRUS LIFE CYCLE
African Journals Online (AJOL)
Dr Obe
dynamic system analysis is applied to model the virus life cycle. Simulation of the derived model ... Keywords: Virus life cycle, Petri nets, modeling, simulation. ... complex process. Figure 2 ... by creating Matlab files for five different computer ...
Computational Models of Rock Failure
May, Dave A.; Spiegelman, Marc
2017-04-01
Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of
International Nuclear Information System (INIS)
Stempniewicz, M.; Marks, P.; Salwa, K.
1992-06-01
TASAC (Thermal Analysis of Severe Accident Conditions) is a computer code, developed at the Institute of Atomic Energy and written in FORTRAN 77, for the digital computer analysis of PWR rod bundle behaviour during severe accident conditions. The code has the ability to model an early stage of core degradation, including heat transfer inside the rods, convective and radiative heat exchange, cladding interactions with coolant and fuel, hydrogen generation, and the melting, relocation and refreezing of fuel rod materials with dissolution of UO2 and ZrO2 in the liquid phase. The code was applied to the simulation of International Standard Problem number 28, performed at the PHEBUS test facility. This report contains a description of the program's physical models, a detailed description of input data requirements, and the results of code verification. The main directions for future TASAC code development are formulated. (author). 20 refs, 39 figs, 4 tabs
Lönnberg, Tapio; Svensson, Valentine; James, Kylie R.; Fernandez-Ruiz, Daniel; Sebina, Ismail; Montandon, Ruddy; Soon, Megan S. F.; Fogg, Lily G.; Nair, Arya Sheela; Liligeto, Urijah; Stubbington, Michael J. T.; Ly, Lam-Ha; Bagger, Frederik Otzen; Zwiessele, Max; Lawrence, Neil D.
2017-01-01
Differentiation of naïve CD4+ T cells into functionally distinct T helper subsets is crucial for the orchestration of immune responses. Due to extensive heterogeneity and multiple overlapping transcriptional programs in differentiating T cell populations, this process has remained a challenge for systematic dissection in vivo. By using single-cell transcriptomics and computational analysis with a temporal mixture of Gaussian processes model, termed GPfates, we reconstructed the developmenta...
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality against any unknown or unquantified uncertainties. Because the administrative margin of subcriticality has a substantial impact on the economics and safety of nuclear fuel cycle operations, recent growing interest in reducing this margin has made uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
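The sampling-based approach can be illustrated with a toy surrogate: draw the uncertain inputs from their assumed distributions, re-evaluate k_eff for each sample, and summarize the spread. The input distributions and the linear k_eff response below are invented for illustration; a real analysis would re-run a Monte Carlo transport code for each sample rather than a closed-form function:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy surrogate for a criticality code: k_eff as a function of two
# uncertain inputs (illustrative stand-in for a transport calculation)
def k_eff_model(enrichment, density):
    return 0.9 + 0.05 * enrichment + 0.02 * density

# Sampling-based uncertainty propagation: sample the inputs from their
# (assumed) distributions and re-evaluate k_eff for every sample
n = 10_000
enrichment = rng.normal(1.0, 0.10, n)   # relative units, 10% uncertainty
density = rng.normal(1.0, 0.05, n)      # relative units, 5% uncertainty

k = k_eff_model(enrichment, density)
k_mean, k_std = k.mean(), k.std(ddof=1)
```

For this linear surrogate the analytical (first-order sensitivity) method gives the same answer, sqrt((0.05*0.10)^2 + (0.02*0.05)^2) ≈ 0.0051; the sampling method remains valid when the response is nonlinear or the perturbations are large, at the cost of many code runs.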
Computational analysis of a multistage axial compressor
Mamidoju, Chaithanya
Turbomachines are used extensively in the Aerospace, Power Generation, and Oil & Gas industries. The efficiency of these machines is often an important factor and has led to a continuous effort to improve designs for better efficiency. The axial flow compressor is a major component of a gas turbine, and the turbine's overall performance depends strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. A methodology is described for steady-state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions and tip clearance, and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations are given for the flow features observed in the computational study. The total pressure rise versus mass flow rate was also computed.
Computational analysis of cerebral cortex
Energy Technology Data Exchange (ETDEWEB)
Takao, Hidemasa; Abe, Osamu; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)
2010-08-15
Magnetic resonance imaging (MRI) has been used in many in vivo anatomical studies of the brain. Computational neuroanatomy is an expanding field of research, and a number of automated, unbiased, objective techniques have been developed to characterize structural changes in the brain using structural MRI without the need for time-consuming manual measurements. Voxel-based morphometry is one of the most widely used automated techniques to examine patterns of brain changes. Cortical thickness analysis is also becoming increasingly used as a tool for the study of cortical anatomy. Both techniques can be relatively easily used with freely available software packages. MRI data quality is important in order for the processed data to be accurate. In this review, we describe MRI data acquisition and preprocessing for morphometric analysis of the brain and present a brief summary of voxel-based morphometry and cortical thickness analysis. (orig.)
Business model elements impacting cloud computing adoption
DEFF Research Database (Denmark)
Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek
The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...
Description of mathematical models and computer programs
International Nuclear Information System (INIS)
1977-01-01
The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed, describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives
Directory of Open Access Journals (Sweden)
Kazemi P
2017-01-01
Full Text Available Pezhman Kazemi,1 Mohammad Hassan Khalid,1 Ana Pérez Gago,2 Peter Kleinebudde,2 Renata Jachowicz,1 Jakub Szlęk,1 Aleksander Mendyk1 1Department of Pharmaceutical Technology and Biopharmaceutics, Faculty of Pharmacy, Jagiellonian University Medical College, Krakow, Poland; 2Institute of Pharmaceutics and Biopharmaceutics, Heinrich-Heine-University, Düsseldorf, Germany Abstract: Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of
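The model-comparison workflow described in this abstract (several regression techniques assessed by sevenfold cross-validation on a normalized error metric) can be sketched as follows. The synthetic predictors standing in for roll compaction settings and material properties, the response function, and the two-model shortlist are all assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical stand-ins for roll compaction settings and material
# properties (e.g. specific compaction force, gap width, bulk density)
X = rng.uniform(0.0, 1.0, size=(200, 3))
# hypothetical response: a summary statistic of the granule size distribution
y = 5.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, 200)

# Sevenfold cross-validation with a normalized RMSE, as in the study
nrmse = {}
for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=7,
                             scoring="neg_root_mean_squared_error")
    nrmse[name] = -scores.mean() / (y.max() - y.min())
```

The same loop extends directly to the other techniques mentioned (k-nearest neighbors, neural networks), each added as another `(name, model)` pair.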
Computational Modeling in Tissue Engineering
2013-01-01
One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in: (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
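As a minimal illustration of the discrete event approach (not the two-part framework of the paper itself), a single resource pool with randomly arriving service requests can be simulated event by event. The M/M/1 queue structure and the arrival/service rates below are assumptions chosen for illustration:

```python
import random

random.seed(7)

# Minimal discrete-event sketch: service requests arrive at random and
# queue for one server (an M/M/1 queue with illustrative rates)
ARRIVAL_RATE, SERVICE_RATE, N_REQUESTS = 0.8, 1.0, 50_000

t = 0.0            # simulation clock: time of the current arrival
server_free = 0.0  # time at which the server next becomes idle
waits = []
for _ in range(N_REQUESTS):
    t += random.expovariate(ARRIVAL_RATE)        # next arrival event
    start = max(t, server_free)                  # wait if the server is busy
    waits.append(start - t)
    server_free = start + random.expovariate(SERVICE_RATE)  # departure event

mean_wait = sum(waits) / len(waits)
# queueing theory predicts a mean wait of rho / (mu - lambda) = 4.0 here
```

A full model of the kind described would draw demand from a per-request-type probability distribution and add resource constraints (multiple servers, scheduling policy), but static analysis of the same system is already infeasible once virtualized scheduling is involved, which is the motivation for simulation.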
International Nuclear Information System (INIS)
García González, M.; García Jiménez, P.; Martínez Domínguez, F.
2016-01-01
To carry out an analysis of the distribution of gases within the containment building at the CN Almaraz site, a simulation model built with the thermohydraulic GOTHIC [1] code has been used, incorporating a gas control system based on passive autocatalytic recombiners (PARs). The model is used to test the effectiveness of the gas control systems to be used in the Almaraz Nuclear Power Plant, Units I and II (Cáceres, Spain; 1,035 MW and 1,044 MW), and must confirm the location and number of the recombiners proposed for installation. It is an essential function of the gas control system to avoid any formation of explosive atmospheres by reducing and limiting the concentration of combustible gases during an accident, thus maintaining the integrity of the containment. The model considers severe accident scenarios with specific conditions that produce the most onerous generation of combustible gases.
Computer modeling of commercial refrigerated warehouse facilities
International Nuclear Information System (INIS)
Nicoulin, C.V.; Jacobs, P.C.; Tory, S.
1997-01-01
The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented
Directory of Open Access Journals (Sweden)
Thomas eJahans-Price
2014-03-01
Full Text Available We introduce a computational model describing rat behaviour and the interactions of neural populations processing spatial and mnemonic information during a maze-based, decision-making task. The model integrates sensory input and implements a working memory to inform decisions at a choice point, reproducing rat behavioural data and predicting the occurrence of turn- and memory-dependent activity in neuronal networks supporting task performance. We tested these model predictions using a new software toolbox (Maze Query Language, MQL) to analyse activity of medial prefrontal cortical (mPFC) and dorsal hippocampal (dCA1) neurons recorded from 6 adult rats during task performance. The firing rates of dCA1 neurons discriminated context (i.e. the direction of the previous turn), whilst a subset of mPFC neurons was selective for current turn direction or context, with some conjunctively encoding both. mPFC turn-selective neurons displayed a ramping of activity on approach to the decision turn, and turn-selectivity in mPFC was significantly reduced during error trials. These analyses complement data from neurophysiological recordings in non-human primates indicating that firing rates of cortical neurons correlate with integration of sensory evidence used to inform decision-making.
International Nuclear Information System (INIS)
Mohd Jamil Hashim; Mohd Abdul Wahab Yusof; Mohd Raihan Taha
2007-01-01
The VHELP 2.2v computer program is a landfill model used to study the performance of the various layers in a radioactive repository or landfill. The water balance for the whole repository is presented in terms of hydrologic parameters such as hydraulic conductivity, runoff, rainfall, surcharge, percolation and evapotranspiration. This study includes the selection and laboratory testing of material density, porosity, void ratio and moisture to achieve the hydraulic conductivity required for water balance. The integrity of the layer is then predicted throughout its life span, limited to 100 years. This modeling allows us to formulate a better compaction method, deriving a suitable compacted soil liner to control cracks, bath-tub effects, leachate discharge and repository stability. Lysimeter sampling and a double-ring infiltrometer were used to obtain the actual hydraulic conductivity. This parameter gives the modeling input a better understanding of water infiltration and supports a better repository profile design to attain water balance. These studies are the first attempt to examine the radioactive repository design profile in containing surcharge outflow to the ground water. The acquired knowledge will therefore be beneficial for the construction of the upcoming national repository and for all existing municipal landfill designs. (Author)
Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A
2017-06-01
The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal
2017-10-01
Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of the valve regulated lead acid (VRLA) batteries by maintaining the elastic characteristics under a defined level of compression force with the plates of the electrodes. Inevitably, there are inherent challenges to maintain the required level of compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model of compression-recovery behavior of AGM separators has successfully included the fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D data of fiber orientation from X-ray micro-computed tomography, for predicting the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental results of compression-recovery behavior of AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results of AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters that modulates the compression hysteresis of an AGM separator.
Can cloud computing benefit health services? - a SWOT analysis.
Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth
2011-01-01
In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
Opportunity for Realizing Ideal Computing System using Cloud Computing Model
Sreeramana Aithal; Vaikunth Pai T
2017-01-01
An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...
A model-based and computer-aided approach to analysis of human errors in nuclear power plants
International Nuclear Information System (INIS)
Yoon, Wan C.; Lee, Yong H.; Kim, Young S.
1996-01-01
Since the operator's mission in NPPs is increasingly defined by cognitive tasks such as monitoring, diagnosis and planning, the focus of human error analysis should also move from external actions to internal decision-making processes. While more elaborate analysis of the cognitive aspects of human errors will help understand their causes and derive effective countermeasures, a lack of framework and an arbitrary resolution of description may hamper the effectiveness of such analysis. This paper presents new model-based schemes of event description and error classification as well as an interactive computerized support system. The schemes and the support system were produced in an effort to develop an improved version of HPES. The use of a decision-making model enables the analyst to document cognitive aspects of human performance explicitly and at a proper resolution. The stage-specific terms used in the proposed schemes allow field analysts to characterize human errors more easily and with greater confidence. The support system was designed to help the analyst achieve a contextually well-integrated analysis throughout the different parts of HPES
International Nuclear Information System (INIS)
Markowska, O.; Gardzinska, A.; Miechowicz, S.; Chrzan, R.; Urbanik, A.; Miechowicz, S.
2009-01-01
Background: Considerations of the accuracy of alignment between a skull bone loss and an artificial implant model are presented. In standard surgical treatment, the application of prefabricated alloplastic implants requires complicated procedures during surgery, especially additional geometry processing to provide better adjustment of the implant. Rapid Prototyping can be used as an effective tool to generate complex 3D medical models and to improve and simplify surgical treatment planning. The operation time can also be significantly reduced. The aim of the study is an adjustment accuracy analysis based on measurements of the fissure between the bone loss and the implant. Material/Methods: The 3D numerical model was obtained from CT imaging with a Siemens Sensation 10 CT scanner. The physical models were fabricated with 3DP Rapid Prototyping technology. The measurements were performed at determined points along the bone loss and implant borders. Results: The maximal width of the fissure between bone loss and implant was 1.8 mm and the minimal width 0 mm. The average width was 0.714 mm, with a standard deviation of 0.663 mm. Conclusions: The accuracy of the 3DP technique is sufficient to create medical models in selected fields of medicine. Models created using RP methods may then be used to produce implants of biocompatible material, for example by vacuum casting. Use of the suggested method may shorten pre-surgery and surgery time. (authors)
Energy Technology Data Exchange (ETDEWEB)
Brown, D L
2009-05-01
Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex
International Nuclear Information System (INIS)
Brown, D.L.
2009-01-01
Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems
Directory of Open Access Journals (Sweden)
Tian Wu
2014-11-01
Full Text Available This paper presents a model for the projection of Chinese vehicle stocks and road-vehicle energy demand through 2050 under low-, medium-, and high-growth scenarios. To derive a gross-domestic-product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim, and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios, respectively. Furthermore, the growth in China’s vehicle stock will pass the inflection point of the Gompertz curve by 2020, but will not reach the saturation point during the period 2014–2050. Of the major road-vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP; vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth only weakly affects vehicle demand. Projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for the respective scenarios.
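The GDP-dependent Gompertz ownership curve described above can be sketched as follows; the saturation level and shape parameters here are illustrative placeholders, not the values fitted in the study.

```python
import math

def gompertz_vehicle_stock(gdp_per_capita, v_sat=500.0, a=6.0, b=1e-4):
    """Vehicle ownership (vehicles per 1000 people) as a Gompertz function of
    per-capita GDP: V(g) = v_sat * exp(-a * exp(-b * g)). The saturation level
    v_sat and shape parameters a, b are assumed here; the study fits them to
    historical North American, Pacific Rim and European data."""
    return v_sat * math.exp(-a * math.exp(-b * gdp_per_capita))

# Ownership rises along an S-curve and approaches v_sat as income grows.
low = gompertz_vehicle_stock(5_000)
high = gompertz_vehicle_stock(60_000)
```

The inflection point the abstract refers to is where the curve's growth rate peaks, at V = v_sat/e under this parameterization.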
Liu, Huolong; Li, Mingzhong
2014-11-20
In this work a two-compartmental population balance model (TCPBM) was proposed to model a pulsed top-spray fluidized bed granulation. The proposed TCPBM considered the spatially heterogeneous granulation mechanisms of the granule growth by dividing the granulator into two perfectly mixed zones of the wetting compartment and drying compartment, in which the aggregation mechanism was assumed in the wetting compartment and the breakage mechanism was considered in the drying compartment. The sizes of the wetting and drying compartments were constant in the TCPBM, in which 30% of the bed was the wetting compartment and 70% of the bed was the drying compartment. The exchange rate of particles between the wetting and drying compartments was determined by the details of the flow properties and distribution of particles predicted by the computational fluid dynamics (CFD) simulation. The experimental validation has shown that the proposed TCPBM can predict evolution of the granule size and distribution within the granulator under different binder spray operating conditions accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
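A minimal sketch of the two-compartment idea, with the 30%/70% bed split, aggregation confined to the wetting zone, and an assumed constant exchange rate (the actual TCPBM derives the exchange rate from CFD and tracks full size distributions, not just particle counts):

```python
def simulate_tcpbm(n_total=1e6, k_exchange=0.5, k_agg=0.01, dt=0.01, steps=1000):
    """Toy two-compartment balance: particle counts cycle between a wetting
    zone (30% of the bed) and a drying zone (70%), while aggregation consumes
    particles in the wetting zone only. All rate constants are illustrative."""
    n_wet, n_dry = 0.3 * n_total, 0.7 * n_total
    for _ in range(steps):
        flow_wd = k_exchange * n_wet                 # wetting -> drying
        flow_dw = k_exchange * n_dry * (0.3 / 0.7)   # drying -> wetting, keeps 30/70 split at equilibrium
        agg_loss = k_agg * n_wet                     # aggregation reduces particle count
        n_wet += dt * (flow_dw - flow_wd - agg_loss)
        n_dry += dt * (flow_wd - flow_dw)
    return n_wet, n_dry

n_wet, n_dry = simulate_tcpbm()
```

Explicit Euler stepping is used for brevity; a production population balance solver would discretize the size coordinate as well.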
Directory of Open Access Journals (Sweden)
F. Joos
2013-03-01
Full Text Available The responses of carbon dioxide (CO2 and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP and Global Temperature change Potential (GTP, to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%. The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2. This value very likely (5 to 95% confidence lies within the range of (68 to 117 × 10−15 yr W m−2 per kg-CO2. Estimates for time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions, compared to present day, and lower for smaller pulses than larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. Although, choices in pulse size, background concentration, and model lead to uncertainties, the most important and
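The AGWP definition quoted above (time-integrated CO2 response multiplied by radiative efficiency) can be reproduced approximately with a Bern-type multi-exponential impulse response function. The coefficients and the radiative efficiency below follow the usual published form of such fits but should be treated as assumptions, so the result only roughly matches the 92.5 × 10−15 yr W m−2 per kg-CO2 best estimate.

```python
import math

# Bern-type impulse response fit: a constant airborne term plus three decaying
# modes (amplitude, time constant in years). Values are assumed, not quoted.
A = [(0.2173, None), (0.2240, 394.4), (0.2824, 36.54), (0.2763, 4.304)]
RAD_EFF = 1.77e-15  # assumed radiative efficiency, W m^-2 per kg CO2

def irf(t):
    """Fraction of a CO2 emission pulse still airborne after t years."""
    return sum(a if tau is None else a * math.exp(-t / tau) for a, tau in A)

def agwp(horizon=100.0, n=10_000):
    """Absolute Global Warming Potential: trapezoidal time integral of the
    airborne fraction over the horizon, times radiative efficiency."""
    dt = horizon / n
    integral = sum(0.5 * (irf(i * dt) + irf((i + 1) * dt)) * dt for i in range(n))
    return RAD_EFF * integral

agwp_100 = agwp()
```

With these placeholder numbers the 100-year AGWP lands near 9 × 10−14 yr W m−2 per kg, i.e. in the abstract's stated 68–117 × 10−15 range.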
Computer modeling of liquid crystals
International Nuclear Information System (INIS)
Al-Barwani, M.S.
1999-01-01
In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture, and studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large-scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order-parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest-energy states and investigate whether they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)
Computer network environment planning and analysis
Dalphin, John F.
1989-01-01
The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.
Climate models on massively parallel computers
International Nuclear Information System (INIS)
Vitart, F.; Rouvillois, P.
1993-01-01
First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would allow simulation of the thermohaline circulation and other interaction phenomena between atmosphere and ocean. Increasing computer power, and with it improved resolution, will force us to revise our approximations: the hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh falls below a few kilometers, and other models will have to be found. The expertise in numerical analysis gained at the Centre of Limeil-Valenton (CEL-V) will be reused to devise global models taking into account atmosphere, ocean, sea ice and biosphere, allowing climate simulation down to the regional scale
Computer models for economic and silvicultural decisions
Rosalie J. Ingram
1989-01-01
Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed in quality-adjusted life years (QALYs). The simulation was carried out initially for 120 monthly cycles, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. A probabilistic sensitivity analysis was then carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was cost-effective in the long term, although the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant, being cheaper and yielding more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the extra cost of CAS, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
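The structure of such a cost-utility Markov model, monthly cycles, discounting and an ICER, can be sketched with a toy two-state cohort. All transition probabilities, costs and utilities below are invented for illustration and are not the study's inputs, so no conclusion about dominance should be read from the numbers.

```python
def run_cohort(p_revision, cost_op, utility, cycles=120, annual_disc=0.035, cohort=1000):
    """Toy two-state Markov cohort ('well' vs. 'revised') advanced in monthly
    cycles, accumulating discounted costs and QALYs. All inputs are assumed."""
    disc_m = (1 + annual_disc) ** (1 / 12) - 1        # monthly discount rate
    well, revised = float(cohort), 0.0
    cost, qaly = cohort * cost_op, 0.0                # everyone pays the operation up front
    for t in range(cycles):
        d = 1.0 / (1 + disc_m) ** t                   # discount factor for cycle t
        moving = well * p_revision                    # transitions to 'revised' this month
        cost += d * moving * 5000                     # assumed cost per revision
        qaly += d * (well * utility + revised * (utility - 0.1)) / 12
        well, revised = well - moving, revised + moving
    return cost, qaly

# Hypothetical comparison: computer-assisted (fewer revisions, dearer operation)
# versus conventional TKR; the ICER is the cost difference per QALY gained.
c_cas, q_cas = run_cohort(0.001, 9000, 0.80)
c_con, q_con = run_cohort(0.002, 8000, 0.78)
icer = (c_cas - c_con) / (q_cas - q_con)
```

A probabilistic sensitivity analysis, as in the study, would wrap this cohort run in a Monte Carlo loop that samples the inputs from distributions.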
Directory of Open Access Journals (Sweden)
Elder M. Mendoza Orbegoso
2017-06-01
Full Text Available Mango is one of the most popular and best-paid tropical fruits in worldwide markets; its export is regulated under phytosanitary quality control aimed at killing the “fruit fly”. Thus, mangoes must be subjected to a hot-water treatment process that involves their immersion in hot water over a period of time. In this work, field measurements and analytical and simulation studies are carried out on available hot-water treatment equipment called “Original” that complies only with United States phytosanitary protocols. These approaches are used to characterize the fluid-dynamic and thermal behaviour that occurs during the mangoes’ hot-water treatment process. An analytical model and computational fluid dynamics simulations are then developed to design new hot-water treatment equipment called “Hybrid” that simultaneously meets both United States and Japan phytosanitary certifications. Comparisons of analytical results with field measurements demonstrate that the “Hybrid” equipment offers better fluid-dynamic and thermal performance than the “Original” one.
Directory of Open Access Journals (Sweden)
M. A. Inayathullaah
2014-01-01
Full Text Available In order to achieve high torque at low power with high efficiency, a new five-phase permanent magnet brushless DC (PMBLDC motor design was analyzed and optimized. A similar three-phase motor having the same D/L ratio (inner diameter (D and length of the stator (L is compared for maximum torque and torque ripple of the designed five-phase PMBLDC motor. Maxwell software was used to build finite element simulation model of the motor. The internal complicated magnetic field distribution and dynamic performance simulation were obtained in different positions. No load and load characteristics of the five-phase PMBLDC motor were simulated, and the power consumption of materials was computed. The conformity of the final simulation results indicates that this method can be used to provide a theoretical basis for further optimal design of this new type of motor with its drive so as to improve the starting torque and reduce torque ripple of the motor.
Rough – Granular Computing knowledge discovery models
Directory of Open Access Journals (Sweden)
Mohammed M. Eissa
2016-11-01
Full Text Available The medical domain has become one of the most important areas of research owing to the huge amounts of medical information about the symptoms of diseases and the need to distinguish between them for correct diagnosis. Knowledge discovery models play a vital role in refining and mining medical indicators to help medical experts settle treatment decisions. This paper introduces four hybrid Rough – Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology for knowledge extraction, according to different evaluation criteria for the classification of medical datasets. Another purpose is to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.
Computational fluid dynamic modelling of cavitation
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models in sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady state analysis to three-dimensions and/or time-dependent flows are, in principle, straightforward although geometrical issues become more complicated. Linearized models, however offer promise of providing effective cavitation modeling in three-dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Adapting computational text analysis to social science (and vice versa
Directory of Open Access Journals (Sweden)
Paul DiMaggio
2015-11-01
Full Text Available Social scientists and computer scientists are divided by small differences in perspective, not by any significant disciplinary divide. In the field of text analysis, several such differences are noted: social scientists often use unsupervised models to explore corpora, whereas many computer scientists employ supervised models trained on labeled data; social scientists hold to more conventional causal notions than do most computer scientists, and often favor intense exploitation of existing algorithms, whereas computer scientists focus more on developing new models; and computer scientists tend to trust human judgment more than social scientists do. These differences have implications that can potentially improve the practice of social science.
Cloud Computing, Tieto Cloud Server Model
Suikkanen, Saara
2013-01-01
The purpose of this study is to find out what cloud computing is. To make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and a SWOT analysis of the cloud. To answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...
Directory of Open Access Journals (Sweden)
Kupecki Jakub
2017-03-01
Full Text Available The article presents a numerical analysis of an innovative method for starting systems based on high temperature fuel cells. The possibility of preheating the fuel cell stacks from the cold state to the nominal working conditions encounters several limitations related to heat transfer and stability of materials. The lack of rapid and safe start-up methods limits the proliferation of MCFCs and SOFCs. For that reason, an innovative method was developed and verified using the numerical analysis presented in the paper. A dynamic 3D model was developed that enables thermo-fluidic investigations and determination of measures for shortening the preheating time of the high temperature fuel cell stacks. The model was implemented in ANSYS Fluent computational fluid dynamic (CFD software and was used for verification of the proposed start-up method. The SOFC was chosen as a reference fuel cell technology for the study. Results obtained from the study are presented and discussed.
Computational Analysis of Human Blood Flow
Panta, Yogendra; Marie, Hazel; Harvey, Mark
2009-11-01
Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed, assuming laminar blood flow with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume software package, coupled with Solidworks, a modeling software package, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models, e.g. T-branches and angle-shaped vessels, were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. A three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces with a compliant boundary was also performed.
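As a sanity check on CFD wall-shear-stress output, the steady laminar (Poiseuille) estimate is often computed by hand; the viscosity and aortic dimensions below are typical textbook values, not those of the presented model, and real aortic flow is pulsatile with compliant walls.

```python
import math

def poiseuille_wss(flow_rate_m3s, radius_m, viscosity=3.5e-3):
    """Wall shear stress for steady laminar (Poiseuille) flow in a rigid tube:
    tau_w = 4 * mu * Q / (pi * R^3). A blood viscosity of 3.5 mPa*s is assumed;
    this gives only an order-of-magnitude check on CFD results."""
    return 4 * viscosity * flow_rate_m3s / (math.pi * radius_m ** 3)

# Resting cardiac output of ~5 L/min through an aorta of ~12.5 mm radius:
tau_wall = poiseuille_wss(5e-3 / 60, 0.0125)
```

The result is a fraction of a pascal, consistent with the low time-averaged shear commonly reported for the aorta.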
Yamaguchi, Satoshi; Yamanishi, Yasufumi; Machado, Lucas S; Matsumoto, Shuji; Tovar, Nick; Coelho, Paulo G; Thompson, Van P; Imazato, Satoshi
2018-01-01
The aim of this study was to evaluate the fatigue resistance of dental fixtures with two different fixture-abutment connections by in vitro fatigue testing and in silico three-dimensional finite element analysis (3D FEA) using original computer-aided design (CAD) models. Dental implant fixtures with external connection (EX) or internal connection (IN) abutments were fabricated from original CAD models using grade IV titanium, and step-stress accelerated life testing was performed. Fatigue cycles and loads were assessed by Weibull analysis, and fatigue cracking was observed by micro-computed tomography and a stereomicroscope with high-dynamic-range software. Using the same CAD models, displacement vectors of implant components were also analyzed by 3D FEA. Angles of the fractured line occurring at fixture platforms in vitro and of displacement vectors corresponding to the fractured line in silico were compared by two-way ANOVA. Fatigue testing showed significantly greater reliability for IN than EX, with fatigue cracking occurring at the implant fixture platforms. FEA demonstrated that crack lines of both implant systems in vitro were observed in the same direction as displacement vectors of the implant fixtures in silico. In silico displacement vectors in the implant fixture are insightful for the geometric development of dental implants to reduce the complex interactions leading to fatigue failure. Copyright © 2017 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
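Step-stress fatigue data of this kind are conventionally summarized with a two-parameter Weibull reliability function; the shape and characteristic-life values below are illustrative, not the study's estimates.

```python
import math

def weibull_reliability(n_cycles, beta, eta):
    """Two-parameter Weibull reliability R(N) = exp(-(N / eta)^beta), the form
    typically fitted to fatigue-life data. beta is the shape parameter and
    eta the characteristic life in cycles; both are assumed here."""
    return math.exp(-(n_cycles / eta) ** beta)

# A connection with a higher characteristic life survives a given cycle count
# with higher probability (hypothetical numbers, same assumed shape):
r_internal = weibull_reliability(100_000, beta=1.5, eta=400_000)
r_external = weibull_reliability(100_000, beta=1.5, eta=250_000)
```

Comparing fitted reliabilities at a mission cycle count, as above, is the usual way such Weibull analyses rank competing designs.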
Piping stress analysis with personal computers
International Nuclear Information System (INIS)
Revesz, Z.
1987-01-01
The growing market for personal computers is providing an increasing number of professionals with unprecedented and surprisingly inexpensive computing capacity which, when used with powerful software, can immensely enhance an engineer's capabilities. This paper focuses on the possibilities that the widespread distribution of personal computers has opened up in piping stress analysis, on the necessary changes in the software, and on the limitations of using personal computers for engineering design and analysis. Reliability and quality assurance aspects of using personal computers for nuclear applications are also mentioned. The paper concludes with the author's personal views and experiences gained during interactive graphics piping software development for personal computers. (orig./GL)
Computer Programme for the Dynamic Analysis of Tall Regular ...
African Journals Online (AJOL)
The traditional method of dynamic analysis of tall rigid frames assumes the shear frame model. Models that allow joint rotations with/without the inclusion of the column axial loads give improved results but pose much more computational difficulty. In this work a computer program Natfrequency that determines the dynamic ...
CMS Computing Software and Analysis Challenge 2006
Energy Technology Data Exchange (ETDEWEB)
De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)
2007-10-15
The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data-handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50-million-event exercise that included all the steps of the analysis chain: prompt reconstruction, data streaming, iterative calibration and alignment executions, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and analysis jobs. An overview of the status and results of CSA06 is presented in this work.
CMS Computing Software and Analysis Challenge 2006
International Nuclear Information System (INIS)
De Filippis, N.
2007-01-01
The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data-handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50-million-event exercise that included all the steps of the analysis chain: prompt reconstruction, data streaming, iterative calibration and alignment executions, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and analysis jobs. An overview of the status and results of CSA06 is presented in this work
Process for computing geometric perturbations for probabilistic analysis
Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX
2012-04-10
A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
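The displacement-vector idea can be sketched as follows, with the mean-value-coordinate weighting of the patented method replaced by a uniform unit direction for brevity; the node coordinates and scale are purely illustrative.

```python
def perturb_nodes(nodes, direction, scale):
    """Apply a geometric perturbation: each node of the region of interest is
    moved along a displacement vector scaled by the sampled uncertainty.
    In the actual method the per-node vectors come from mean-value coordinate
    calculations; here a single uniform unit vector stands in for them."""
    dx, dy, dz = direction
    return [(x + scale * dx, y + scale * dy, z + scale * dz)
            for x, y, z in nodes]

nominal = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
perturbed = perturb_nodes(nominal, direction=(0.0, 0.0, 1.0), scale=0.05)
```

In a probabilistic analysis, `scale` would be drawn from the distribution of the geometric uncertainty and the finite element model re-meshed or morphed accordingly.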
Computational Chemical Synthesis Analysis and Pathway Design
Directory of Open Access Journals (Sweden)
Fan Feng
2018-06-01
Full Text Available With the idea of retrosynthetic analysis, which was raised in the 1960s, chemical synthesis analysis and pathway design have been transformed from a complex problem into a regular process of structural simplification. This review aims to summarize the developments in computer-assisted synthetic analysis and design in recent years, and how machine-learning algorithms have contributed to them. The LHASA system pioneered the encoding of semi-empirical reaction modes in computers, and the rule-based and network-searching work that followed not only expanded the databases but also built new approaches to representing reaction rules. Programs like ARChem Route Designer replaced hand-coded reaction modes with automatically extracted rules, and programs like Chematica turned traditional design into network searching. Afterward, with the help of machine learning, two-step models that combine reaction rules and statistical methods became mainstream. Recently, fully data-driven learning methods using deep neural networks, which do not even require prior knowledge, have been applied in this field. Up to now, however, these methods still cannot replace experienced human organic chemists due to their relatively low accuracies. Future algorithms, aided by powerful computational hardware, will make this topic promising and full of prospects.
Computer models for optimizing radiation therapy
International Nuclear Information System (INIS)
Duechting, W.
1998-01-01
The aim of this contribution is to outline how methods of systems analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatment, for instance radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is the so-called linear-quadratic model, which describes the survival fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed. Thus, it is possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation of an optimal patient-specific treatment schedule prior to clinical therapy. (orig.) [de
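The linear-quadratic survival model mentioned above is simple to state; the α and β values below are typical textbook magnitudes, not parameters from the paper.

```python
import math

def lq_survival(dose_gy, alpha=0.3, beta=0.03):
    """Linear-quadratic cell survival: S(D) = exp(-(alpha*D + beta*D^2)).
    alpha (Gy^-1) and beta (Gy^-2) are tissue-specific; the defaults here
    are illustrative textbook magnitudes."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# Fractionation spares tissue: n fractions of d Gy give S = S(d)^n, which is
# larger (more survival) than a single fraction of n*d Gy.
s_single_8gy = lq_survival(8.0)
s_four_2gy = lq_survival(2.0) ** 4
```

This sparing effect of fractionation is exactly what allows treatment schedules to be tuned separately for tumor and normal tissue in the simulations described.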
Disciplines, models, and computers: the path to computational quantum chemistry.
Lenhard, Johannes
2014-12-01
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.
Computer modeling of the gyrocon
International Nuclear Information System (INIS)
Tallerico, P.J.; Rankin, J.E.
1979-01-01
A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity wall, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds megawatt output at 450 MHz with up to 86% overall efficiency
Kolanjiyil, Arun V; Kleinstreuer, Clement
2016-12-01
Computational predictions of aerosol transport and deposition in the human respiratory tract can assist in evaluating detrimental or therapeutic health effects when inhaling toxic particles or administering drugs. However, the sheer complexity of the human lung, featuring a total of 16 million tubular airways, prohibits detailed computer simulations of the fluid-particle dynamics for the entire respiratory system. Thus, in order to obtain useful and efficient particle deposition results, an alternative modeling approach is necessary where the whole-lung geometry is approximated and physiological boundary conditions are implemented to simulate breathing. In Part I, the present new whole-lung-airway model (WLAM) represents the actual lung geometry via a basic 3-D mouth-to-trachea configuration while all subsequent airways are lumped together, i.e., reduced to an exponentially expanding 1-D conduit. The diameter for each generation of the 1-D extension can be obtained on a subject-specific basis from the calculated total volume which represents each generation of the individual. The alveolar volume was added based on the approximate number of alveoli per generation. A wall-displacement boundary condition was applied at the bottom surface of the first-generation WLAM, so that any breathing pattern due to the negative alveolar pressure can be reproduced. Specifically, different inhalation/exhalation scenarios (rest, exercise, etc.) were implemented by controlling the wall/mesh displacements to simulate realistic breathing cycles in the WLAM. Total and regional particle deposition results agree with experimental lung deposition results. The outcomes provide critical insight to and quantitative results of aerosol deposition in human whole-lung airways with modest computational resources. Hence, the WLAM can be used in analyzing human exposure to toxic particulate matter or it can assist in estimating pharmacological effects of administered drug-aerosols. As a practical
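The lumping of each airway generation into an exponentially expanding 1-D conduit can be sketched as follows. The mouth diameter and per-generation diameter ratio below are assumed Weibel-like illustration values, not the subject-specific values the WLAM derives from total volume per generation:

```python
import math

def lumped_diameter(generation, d0=0.018, ratio=0.79):
    """Equivalent diameter (m) of a single 1-D conduit lumping all airways
    of one generation. d0 and ratio are assumed illustration values."""
    d_tube = d0 * ratio ** generation       # per-tube diameter of this generation
    n_tubes = 2 ** generation               # dichotomous branching
    area = n_tubes * math.pi * d_tube ** 2 / 4.0
    return math.sqrt(4.0 * area / math.pi)  # single conduit with same total area
```

Because total cross-sectional area grows by a factor of 2 × ratio² ≈ 1.25 per generation with these numbers, the equivalent conduit diameter expands exponentially toward the lung periphery, which is the geometric feature the WLAM exploits.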
Development of a computer-assisted system for model-based condylar position analysis (E-CPM).
Ahlers, M O; Jakstat, H
2009-01-01
Condylar position analysis is a measuring method for the three-dimensional quantitative acquisition of the position of the mandible in different conditions or at different points in time. Originally, the measurement was done based on a model, using special mechanical condylar position measuring instruments, and on a research scale with mechanical-electronic measuring instruments. Today, as an alternative, it is possible to take measurements with electronic measuring instruments applied directly to the patient. The computerization of imaging has also facilitated condylar position measurement by means of three-dimensional data records obtained by imaging examination methods, which has been used in connection with the simulation and quantification of surgical operation results. However, the comparative measurement of the condylar position at different points in time has so far not been possible to the required degree. An electronic measuring instrument, allowing acquisition of the condylar position in clinical routine and facilitating later calibration with measurements from later examinations by data storage and use of precise equalizing systems, was therefore designed by the present authors. This measuring instrument was implemented on the basis of already existing components from the Reference CPM und Cadiax Compact articulator and registration systems (Gamma Dental, Klosterneuburg, Austria) as well as the matching CMD3D evaluation software (dentaConcept, Hamburg).
Quantum vertex model for reversible classical computing.
Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C
2017-05-12
Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
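Generic thermal annealing toward a ground state, the strategy the authors apply to the vertex model, can be sketched as follows (an illustrative minimizer with made-up schedule parameters, not the paper's code):

```python
import math, random

def anneal(energy, state, propose, t0=2.0, cooling=0.999, steps=4000, seed=7):
    """Thermal annealing toward a low-energy state: accept downhill moves
    always, uphill moves with Boltzmann probability exp(-dE/T)."""
    rng = random.Random(seed)
    best, t = state, t0
    for _ in range(steps):
        cand = propose(state, rng)
        d_e = energy(cand) - energy(state)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            state = cand
            if energy(state) < energy(best):
                best = state
        t *= cooling                      # geometric cooling schedule
    return best
```

In the paper's setting the energy function would count violated vertex constraints of the embedded circuit, and the ground state (zero energy) encodes the result of the computation.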
Modeling Computer Virus and Its Dynamics
Directory of Open Access Journals (Sweden)
Mei Peng
2013-01-01
Full Text Available Based on the fact that a computer can be infected by infected and exposed computers, and that some computers in the susceptible and exposed states can acquire immunity through antivirus capability, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, a threshold for the spread of the computer virus on the internet, is determined. Second, the model has a virus-free equilibrium P0, at which the infected part of the computer population disappears and the virus dies out; P0 is globally asymptotically stable if R0 ≤ 1. If R0 > 1, the model has a unique viral equilibrium P*, at which the virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
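A minimal numerical sketch of an SEIR-type virus model of this kind (parameter names and values are illustrative, not those of the paper) shows the threshold behavior: for this simple structure R0 = beta/gamma, and when R0 < 1 the infected fraction decays to zero.

```python
def simulate(beta, sigma, gamma, t_end=200.0, dt=0.01, i0=1e-3):
    """Forward-Euler integration of a minimal SEIR-type virus model
    on a normalised population of size 1 (illustrative parameters)."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i            # susceptibles contacting infected
        de = beta * s * i - sigma * e # exposed become infectious at rate sigma
        di = sigma * e - gamma * i    # infected are cured at rate gamma
        dr = gamma * i                # cured machines are immune
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    return s, e, i, r
```

Because the four rates sum to zero at every step, the total population s + e + i + r is conserved by construction, mirroring the analytical model.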
ADGEN: ADjoint GENerator for computer models
Energy Technology Data Exchange (ETDEWEB)
Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.
1989-05-01
This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct-access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the low-level waste community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest; for a single response, this compares with the factor of roughly 3000 required to determine these derivatives by parameter perturbations. The automation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
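The adjoint (reverse-mode) accumulation that ADGEN automates for FORTRAN models can be illustrated with a tiny Python sketch. This is not ADGEN's implementation: a real tool records local partials on a tape (the adjoint matrix) during the forward run and traverses it once, whereas this toy version recurses over every path in the graph.

```python
class Var:
    """A value that records its parents and the local partial derivatives
    of the operation that produced it (the forward sweep)."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, tuple(parents), 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        # local partials: d(xy)/dx = y, d(xy)/dy = x
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(node, adjoint=1.0):
    """Reverse (adjoint) sweep: accumulate d(output)/d(node) into .grad."""
    node.grad += adjoint
    for parent, local in node.parents:
        backward(parent, local * adjoint)
```

After one reverse sweep from an output, every input's `.grad` holds the total derivative of that output, which is exactly the one-adjoint-solve-per-response economy that makes the adjoint method cheap for many parameters.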
Rosen, Eyal; Taschieri, Silvio; Del Fabbro, Massimo; Beitlitum, Ilan; Tsesis, Igor
2015-07-01
The aim of this study was to evaluate the diagnostic efficacy of cone-beam computed tomographic (CBCT) imaging in endodontics based on a systematic search and analysis of the literature using an efficacy model. A systematic search of the literature was performed to identify studies evaluating the use of CBCT imaging in endodontics. The identified studies were subjected to strict inclusion criteria followed by an analysis using a hierarchical model of efficacy (model) designed for appraisal of the literature on the levels of efficacy of a diagnostic imaging modality. Initially, 485 possible relevant articles were identified. After title and abstract screening and a full-text evaluation, 58 articles (12%) that met the inclusion criteria were analyzed and allocated to levels of efficacy. Most eligible articles (n = 52, 90%) evaluated technical characteristics or the accuracy of CBCT imaging, which was defined in this model as low levels of efficacy. Only 6 articles (10%) proclaimed to evaluate the efficacy of CBCT imaging to support the practitioner's decision making; treatment planning; and, ultimately, the treatment outcome, which was defined as higher levels of efficacy. The expected ultimate benefit of CBCT imaging to the endodontic patient as evaluated by its level of diagnostic efficacy is unclear and is mainly limited to its technical and diagnostic accuracy efficacies. Even for these low levels of efficacy, current knowledge is limited. Therefore, a cautious and rational approach is advised when considering CBCT imaging for endodontic purposes. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
The IceCube Computing Infrastructure Model
CERN. Geneva
2012-01-01
Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.
Computational nanophotonics modeling and applications
Musa, Sarhan M
2013-01-01
This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.
Computational Analysis of Pharmacokinetic Behavior of Ampicillin
Directory of Open Access Journals (Sweden)
Mária Ďurišová
2016-07-01
Full Text Available Correspondence: Institute of Experimental Pharmacology and Toxicology, Slovak Academy of Sciences, 841 04 Bratislava, Slovak Republic. Phone +421254775928; Fax +421254775928; E-mail: maria.durisova@savba.sk. The objective of this study was to perform a computational analysis of the pharmacokinetic behavior of ampicillin, using data from the literature. A method based on the theory of dynamic systems was used for modeling purposes. This method was introduced to pharmacokinetics with the aim of contributing to the knowledge base by enabling researchers to develop mathematical models of various pharmacokinetic processes in an identical way, using identical model structures. A few examples of the successful use of this modeling method in pharmacokinetics can be found in full-text articles available free of charge at the author's website, and in the example given in this study. The modeling method employed here can be used to develop a mathematical model of the pharmacokinetic behavior of any drug, provided that the behavior of the drug under study can be at least partially approximated using linear models.
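As an illustration of the linear-systems viewpoint (with assumed numbers, not fitted ampicillin values), the simplest linear pharmacokinetic model is the one-compartment IV-bolus model, whose impulse response is a single exponential:

```python
import math

def concentration(t, dose=500.0, v=10.0, k=0.3):
    """One-compartment IV-bolus model: C(t) = (dose/V) * exp(-k t).
    dose (mg), v = volume of distribution (L), k = elimination rate (1/h)
    are illustrative values only."""
    return dose / v * math.exp(-k * t)

def half_life(k=0.3):
    """Time for the concentration to fall by half: t_1/2 = ln 2 / k."""
    return math.log(2.0) / k
```

In transfer-function terms this is H(s) = 1/(V (s + k)), the kind of simple linear structure that dynamic-systems methods generalize to more complex pharmacokinetic behavior.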
Transportation Research & Analysis Computing Center
Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...
Computer models and output, Spartan REM: Appendix B
Marlowe, D. S.; West, E. J.
1984-01-01
A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.
Pervasive Computing and Prosopopoietic Modelling
DEFF Research Database (Denmark)
Michelsen, Anders Ib
2011-01-01
This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term ‘pervasive’, e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser’s original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown’s (1997) terms, ‘invisible’… The article traces to the mid-20th century a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one…
Interface between computational fluid dynamics (CFD) and plant analysis computer codes
International Nuclear Information System (INIS)
Coffield, R.D.; Dunckhorst, F.F.; Tomlinson, E.T.; Welch, J.W.
1993-01-01
Computational fluid dynamics (CFD) can provide valuable input to the development of advanced plant analysis computer codes. The types of interfacing discussed in this paper will directly contribute to modeling and accuracy improvements throughout the plant system and should result in significant reduction of design conservatisms that have been applied to such analyses in the past
Batch Computed Tomography Analysis of Projectiles
2016-05-01
ARL-TR-7681, May 2016. US Army Research Laboratory. Batch Computed Tomography Analysis of Projectiles, by Michael C Golt, Chris M… and Matthew S Bratcher, Weapons and Materials Research… computed tomography values to account for projectile variability in the ballistic evaluation of armor. Subject terms: computed tomography, CT, BS41, projectiles.
Cohen, Sagy; Willgoose, Garry; Hancock, Greg
2009-09-01
Hillslope surface armouring and weathering processes have received little attention in geomorphologic and hydrologic models due to their complexity and the uncertainty associated with them. Their importance, however, in a wide range of spatial processes is well recognized. A physically based armouring and weathering computer model (ARMOUR) has previously been used to successfully simulate the effect of these processes on erosion and soil grading at a hillslope scale. This model is, however, computationally complex and cannot realistically be applied over large areas or over long periods of time. A simplified process conceptualization approach is presented (named mARM) which uses a novel approach of modeling physical processes using transition matrices, which is orders of magnitude faster. We describe in detail the modeling framework. We calibrate and evaluate the model against ARMOUR simulations and show it matches ARMOUR for a range of conditions. The computational efficiency of mARM allowed us to easily examine time- and space-varying relationships between erosion and physical weathering rates at the hillslope scale. For erosion-dominated slopes the surface coarsens over time, while for weathering domination the surface fines over time. When erosion and weathering are comparable in scale a slope can be weathering-dominated upslope (where runoff and therefore erosion is low) and armouring-dominated downslope. In all cases, for a constant gradient slope the surface armour coarsens downslope as a result of a balance between erosion and weathering. Thus even for weathering-dominated slopes the surface grading catena is dependent on armouring through the balance between weathering and armouring. We also observed that for many slopes the surface initially armours but, after some period of time (space- and rate-dependent), weathering begins to dominate and the surface subsequently fines. Depending on the relative magnitude of armouring and weathering the final
Computational advances in transition phase analysis
International Nuclear Information System (INIS)
Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.
1994-01-01
In this paper, historical perspective and recent advances are reviewed on computational technologies to evaluate the transition phase of core disruptive accidents in liquid-metal fast reactors. Analysis of the transition phase requires treatment of multi-phase, multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort began with the SIMMER series of computer codes, whose development was initiated in the late 1970s in the USA. Successful application of the latest SIMMER-II in the USA, western Europe, and Japan has proved its effectiveness but, at the same time, has identified several areas that require further research. Based on the experience and lessons learned during SIMMER-II application through the 1980s, a new SIMMER-III development project is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described with emphasis on recent advances in multi-phase, multi-component fluid dynamics technologies and their expected implications for a future reliable transition phase analysis. (author)
COMPUTER METHODS OF GENETIC ANALYSIS.
Directory of Open Access Journals (Sweden)
A. L. Osipov
2017-02-01
Full Text Available The basic statistical methods used in the genetic analysis of human traits are described, including segregation analysis, linkage analysis, and analysis of allelic associations. Software supporting the implementation of these methods was developed.
DEFF Research Database (Denmark)
De Silva, R. T.; Pasbakhsh, Pooria; Goh, K. L.
2014-01-01
of nanotubes with fixed aspect ratio and the proposed alternative real-structure based model takes the experimentally observed variations of HNTs sizes, impurities and aspect ratios into account. The requirements of the 3-D HNTs nanocomposite models have been explored by testing idealized, real structure based...
Energy Technology Data Exchange (ETDEWEB)
Cho, Gyeong Lyeob; Kwon, Tae Gyu [Korea Energy Economics Institute, Euiwang (Korea)
1999-01-01
The present situation and characteristics of greenhouse gas emissions in Korea are reviewed, followed by a theoretical analysis of the pros and cons of an emissions trading system and a carbon tax, and an estimation, using a GDP model, of the reduction cost and loss of GDP entailed in reducing greenhouse gases. Finally, the ripple effects of a carbon tax and an emissions trading system on the balance of international payments and the output of each industry are reviewed. 24 refs., 34 figs., 30 tabs.
Climate Ocean Modeling on Parallel Computers
Wang, P.; Cheng, B. N.; Chao, Y.
1998-01-01
Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.
Computational Intelligence. Mortality Models for the Actuary
Willemse, W.J.
2001-01-01
This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
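The embarrassingly parallel structure of such model building, where independent fits are fanned out across workers much as they would be across cloud instances, can be sketched as follows (a toy stand-in with an invented scoring function, not the authors' pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def train_model(params):
    """Stand-in for one expensive model fit; returns a toy score that
    peaks at the hypothetical optimum (a, b) = (2, 1)."""
    a, b = params
    return 1.0 / (1.0 + (a - 2.0) ** 2 + (b - 1.0) ** 2)

def parallel_sweep(grid, workers=4):
    """Fan independent fits out over a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(train_model, grid))
```

Since each fit is independent, the sweep scales with the number of workers, and on a metered cloud the cost of the whole sweep is simply (per-instance price) × (instances) × (wall time), which is what makes the speed/economy trade-off quantifiable.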
Applications of computer modeling to fusion research
International Nuclear Information System (INIS)
Dawson, J.M.
1989-01-01
Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Directory of Open Access Journals (Sweden)
Sofia Johansson
Full Text Available Murine natural killer (NK cells express inhibitory Ly49 receptors for MHC class I molecules, which allows for "missing self" recognition of cells that downregulate MHC class I expression. During murine NK cell development, host MHC class I molecules impose an "educating impact" on the NK cell pool. As a result, mice with different MHC class I expression display different frequency distributions of Ly49 receptor combinations on NK cells. Two models have been put forward to explain this impact. The two-step selection model proposes a stochastic Ly49 receptor expression followed by selection for NK cells expressing appropriate receptor combinations. The sequential model, on the other hand, proposes that each NK cell sequentially expresses Ly49 receptors until an interaction of sufficient magnitude with self-class I MHC is reached for the NK cell to mature. With the aim to clarify which one of these models is most likely to reflect the actual biological process, we simulated the two educational schemes by mathematical modelling, and fitted the results to Ly49 expression patterns, which were analyzed in mice expressing single MHC class I molecules. Our results favour the two-step selection model over the sequential model. Furthermore, the MHC class I environment favoured maturation of NK cells expressing one or a few self receptors, suggesting a possible step of positive selection in NK cell education. Based on the predicted Ly49 binding preferences revealed by the model, we also propose, that Ly49 receptors are more promiscuous than previously thought in their interactions with MHC class I molecules, which was supported by functional studies of NK cell subsets expressing individual Ly49 receptors.
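The sequential model can be sketched as a simple stochastic process; the receptor affinities, threshold, and receptor count below are invented for illustration and are not the fitted values of the study:

```python
import random

# Hypothetical binding strengths of four Ly49 receptors to self MHC class I
# (illustration only; real affinities differ per MHC haplotype).
SELF_AFFINITY = [0.05, 0.10, 0.20, 0.40]

def sequential_education(threshold=0.5, max_receptors=8, seed=0):
    """Sequential-model sketch: an NK cell keeps adding randomly chosen
    Ly49 receptors until total self-MHC binding reaches the threshold."""
    rng = random.Random(seed)
    expressed, binding = [], 0.0
    while len(expressed) < max_receptors and binding < threshold:
        r = rng.randrange(len(SELF_AFFINITY))
        expressed.append(r)
        binding += SELF_AFFINITY[r]
    return expressed, binding
```

Running many such cells and tabulating the resulting receptor combinations gives the predicted frequency distribution that can be fitted against observed Ly49 expression patterns; the two-step selection model would instead assign all receptors stochastically up front and then select cells with appropriate combinations.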
Introduction to computation and modeling for differential equations
Edsberg, Lennart
2008-01-01
An introduction to scientific computing for differential equations. Introduction to Computation and Modeling for Differential Equations provides a unified and integrated view of numerical analysis, mathematical modeling in applications, and programming to solve differential equations, which is essential in problem-solving across many disciplines, such as engineering, physics, and economics. This book successfully introduces readers to the subject through a unique "Five-M" approach: Modeling, Mathematics, Methods, MATLAB, and Multiphysics. This approach facilitates a thorough understanding of h
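As a flavor of the "Methods" step (sketched here in Python, though the book itself works in MATLAB), the explicit Euler method for u' = f(t, u) is the simplest numerical scheme in the chain from model to computation:

```python
import math

def euler(f, u0, t_end, n):
    """Explicit Euler for u' = f(t, u) on [0, t_end] with n steps:
    u_{k+1} = u_k + h * f(t_k, u_k)."""
    u, t = u0, 0.0
    h = t_end / n
    for _ in range(n):
        u += h * f(t, u)
        t += h
    return u
```

For the test problem u' = -u, u(0) = 1, the computed value at t = 1 converges to exp(-1) with first-order accuracy: halving h roughly halves the error.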
Directory of Open Access Journals (Sweden)
I. V. Kachanov
2015-01-01
Full Text Available The modern development of industrial production is closely connected with the use of science-based and high technologies that ensure the competitiveness of manufactured products on the world market. There is also considerable pressure around energy and resource saving, a problem which can be solved by introducing new technological processes and creating new materials that increase productivity through automation and improved tool life. The development and implementation of such technologies are often time-consuming processes involving complex calculations and experimental investigations. Simulation modelling of materials processing using modern software products serves as an alternative to experimental and theoretical methods of research. The aim of this paper is to compare experimental results obtained for bimetallic samples of a forming tool produced by high-speed hot extrusion with results obtained through computer simulation using the DEFORM-3D package and the finite element method. Comparative analysis of the plastic flow of real and model samples has shown that the obtained models provide a high-quality and reliable picture of plastic flow during high-speed hot extrusion. Modeling in DEFORM-3D makes it possible to avoid complex calculations and significantly reduce the number of experimental studies needed when developing new technological processes.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
where x increases from zero to N, the saturation value. Box 1. Matrix Methods… such as Laplace transforms and non-linear differential equations with… atomic bomb project in the US in the early… his work on game theory and computers.
Shulgina, T. M.; Gordova, Y. E.; Martynova, Y. V.
2014-12-01
Making education relevant to workplace tasks is a key problem of higher education in the environmental sciences. To answer this challenge, several new courses for students of the "Climatology" and "Meteorology" specialties were developed and implemented at Tomsk State University, combining theoretical knowledge from up-to-date environmental sciences with computational tasks. To organize the educational process we use the open-source course management system Moodle (www.moodle.org), which allows us to combine text and multimedia in the theoretical part of the courses. The hands-on approach is realized through innovative trainings performed within the information-computational web-GIS platform "Climate" (http://climate.scert.ru/). The platform provides a set of tools and databases allowing a researcher to analyze climate changes over a selected territory. These tools are also used for student trainings containing practical tasks on climate modeling and on climate change assessment and analysis. Laboratory exercises cover three topics: "Analysis of regional climate changes"; "Analysis of climate extreme indices on the regional scale"; and "Analysis of future climate". They are designed to consolidate students' knowledge of the discipline, to instill the skills to work independently with large amounts of geophysical data using the modern processing and analysis tools of the web-GIS platform "Climate", and to train students to present laboratory results as reports with a statement of the problem, the results of calculations, and a logically justified conclusion. Students are thus engaged in the use of modern tools for geophysical data analysis, which strengthens their professional learning. The approach can help to fill the gap between education and the workplace because it offers experience, increases student involvement, and advances the use of modern
Social media modeling and computing
Hoi, Steven CH; Boll, Susanne; Xu, Dong; Jin, Rong; King, Irwin
2011-01-01
Presents contributions from an international selection of preeminent experts in the field. Discusses topics on social-media content analysis, and examines social-media system design and analysis. Describes emerging applications of social media.
International Nuclear Information System (INIS)
Botelho, D.A.; Moreira, M.L.
1991-06-01
The Reynolds turbulent transport equations for an incompressible fluid are integrated on a two-dimensional staggered grid, for velocity and pressure, using the SIMPLER method. From the resulting algebraic relations, the TURBO program was developed; its final objectives are the analysis of thermal stratification and natural convection in nuclear reactor pools. The program was tested on problems with previously known analytic or experimental solutions. (author)
Energy Technology Data Exchange (ETDEWEB)
Silva, Alexandre M. da; Balestieri, Jose A.P.; Magalhaes Filho, Paulo [UNESP, Guaratingueta, SP (Brazil). Escola de Engenharia. Dept. de Energia]. E-mails: amarcial@uol.com.br; perella@feg.unesp.br; pfilho@feg.unesp.br
2000-07-01
This paper presents the use of computational resources in a simulation procedure to predict the performance of combined cycle cogeneration systems, with energetic analysis used in the modeling. The thermal demand of a consuming process is used as the main input and, combined with the performance characteristics of each component of the system, the influence of system parameters such as thermal efficiency and global efficiency is evaluated. The computational language is Visual Basic for Applications associated with a spreadsheet. Two combined cycle cogeneration schemes are pre-defined: one is composed of a gas turbine, a heat recovery steam generator, and a back-pressure steam turbine with one extraction, both connected to the process plant at different pressure levels; the other differs in having a two-extraction condensing steam turbine instead of the back-pressure one. Illustrative graphics are generated to allow comparison of the appraised systems. The simulation strategy is obtained by carefully linking the information of the various components according to the flow diagrams. (author)
Petri Net Modeling of Computer Virus Life Cycle | Ikekonwu ...
African Journals Online (AJOL)
The virus life cycle, which refers to the stages of development of a computer virus, is presented as a suitable area for the application of Petri nets. Petri nets, a powerful modeling tool in the field of dynamic system analysis, are applied to model the virus life cycle. Simulation of the derived model is also presented. The intention of ...
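The idea of modeling life-cycle stages as a place/transition net can be sketched in a few lines. This is an illustrative toy, not the paper's model: the places ("dormant", "propagating", "triggered", "executing") and transitions are invented for the example.

```python
# Minimal Petri net sketch of a computer-virus life cycle (places and
# transitions are hypothetical, for illustration only).
# Places hold tokens; a transition fires when its input place has a token.

PLACES = {"dormant": 1, "propagating": 0, "triggered": 0, "executing": 0}

# Each transition: (name, input place, output place)
TRANSITIONS = [
    ("infect",  "dormant",     "propagating"),
    ("trigger", "propagating", "triggered"),
    ("payload", "triggered",   "executing"),
]

def enabled(marking, t):
    """A transition is enabled if its input place holds at least one token."""
    return marking[t[1]] >= 1

def fire(marking, t):
    """Fire a transition: consume a token from the input, produce one in the output."""
    m = dict(marking)
    m[t[1]] -= 1
    m[t[2]] += 1
    return m

def simulate(marking, transitions):
    """Fire enabled transitions until none remains; return the marking trace."""
    trace = [dict(marking)]
    progress = True
    while progress:
        progress = False
        for t in transitions:
            if enabled(marking, t):
                marking = fire(marking, t)
                trace.append(dict(marking))
                progress = True
                break
    return trace

trace = simulate(PLACES, TRANSITIONS)
# The single token moves dormant -> propagating -> triggered -> executing.
```

Real Petri net analyses (reachability, liveness, boundedness) build on exactly this firing rule, generalized to weighted arcs and multiple tokens.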
Geometric modeling for computer aided design
Schwing, James L.; Olariu, Stephen
1995-01-01
The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual-level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. This streamlines the exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.
Computational systems analysis of dopamine metabolism.
Directory of Open Access Journals (Sweden)
Zhen Qi
2008-06-01
A prominent feature of Parkinson's disease (PD) is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment of PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and mitigating the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.
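Biochemical systems theory represents fluxes as products of power laws of the species concentrations. A toy sketch of such a generalized mass action (GMA) system is shown below; the species, exponents and rate constants are invented for illustration and are not the published dopamine model's parameters.

```python
# Toy generalized mass action (GMA) model in the spirit of biochemical
# systems theory: cytosolic dopamine is synthesized, packaged into
# vesicles (power-law flux), degraded, and released back.
# All numbers are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def gma(t, y):
    cyt, ves = y                       # cytosolic / vesicular pools (a.u.)
    synthesis   = 1.0                  # constant synthesis flux
    packaging   = 0.8 * cyt ** 0.5     # power-law transport into vesicles
    degradation = 0.3 * cyt            # first-order degradation
    release     = 0.1 * ves            # first-order vesicular release
    return [synthesis - packaging - degradation + release,
            packaging - release]

sol = solve_ivp(gma, (0.0, 200.0), [0.1, 0.1], rtol=1e-8)
cyt_ss, ves_ss = sol.y[:, -1]          # approximate steady state
```

At steady state the packaging and release fluxes balance, so the cytosolic pool settles at 1/0.3 ≈ 3.33 and the vesicular pool at 8·√(10/3) ≈ 14.6 in this toy parameterization; "mathematical diagnosis" in the paper's sense would then probe such steady states for stability and parameter sensitivity.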
Computer-Aided Modelling Methods and Tools
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...
A Categorisation of Cloud Computing Business Models
Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David
2010-01-01
This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...
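The selectionist dynamics described above can be caricatured in a few lines: a fixed-size repertoire of integer-coded behaviors, one behavior emitted per time step, and fitter "parents" chosen only when the emitted behavior is reinforced. This is a highly simplified sketch; the real model's behavior classes, parental selection and mutation scheme differ.

```python
# Toy selection-by-consequences dynamics (illustrative parameters, not
# the published model). Behaviors are integers 0..1023; emissions in the
# "reinforced" class 0..40 bias reproduction toward similar behaviors.
import random

random.seed(1)
POP, TARGET, STEPS = 100, range(0, 41), 30_000

def step(repertoire):
    emitted = random.choice(repertoire)           # organism emits one behavior
    if emitted in TARGET:                         # reinforced: fitter parents
        parents = sorted(repertoire, key=lambda b: abs(b - emitted))[:20]
    else:                                         # not reinforced: neutral drift
        parents = repertoire
    child = random.choice(parents)
    if random.random() < 0.1:                     # mutation
        child = random.randrange(1024)
    new = repertoire[:]
    new[random.randrange(POP)] = child            # child replaces a random behavior
    return new

rep = [random.randrange(1024) for _ in range(POP)]
for _ in range(STEPS):
    rep = step(rep)

hits = sum(b in TARGET for b in rep)   # repertoire drifts toward the reinforced class
```

By chance alone only about 4 of the 100 behaviors would fall in the reinforced class; the reinforcement-driven reproduction bias pushes the count well above that, which is the qualitative signature of selection by consequences.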
Creation of 'Ukrytie' objects computer model
International Nuclear Information System (INIS)
Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.
1999-01-01
A partial computer model of the 'Ukrytie' object was created using geoinformation technologies. The computer model makes it possible to provide information support for work related to the stabilization of the 'Ukrytie' object and its conversion into an ecologically safe system, and for analyzing, forecasting and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model
Computational models in physics teaching: a framework
Directory of Open Access Journals (Sweden)
Marco Antonio Moreira
2012-08-01
The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.
IUE Data Analysis Software for Personal Computers
Thompson, R.; Caplinger, J.; Taylor, L.; Lawton , P.
1996-01-01
This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.
Case studies in Gaussian process modelling of computer codes
International Nuclear Information System (INIS)
Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony
2006-01-01
In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
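The emulator workflow described above (fit a Gaussian process to a handful of code runs, then interrogate the cheap emulator instead of the code) can be sketched with standard tools. The "simulator", design and sensitivity summary below are invented for illustration; they are not the paper's codes or methods.

```python
# Sketch of emulator-based sensitivity analysis: fit a Gaussian process to
# a few runs of a cheap stand-in "simulator", then estimate each input's
# main-effect variance from the emulator. Everything here is illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulator(x):              # stand-in computer code: x2 matters more than x1
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2

X = rng.uniform(0, 1, size=(40, 2))                      # 40 "code runs"
gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-6).fit(X, simulator(X))

def main_effect_variance(i, grid=21, mc=200):
    """Fix input i on a grid, average the other input out by Monte Carlo,
    and return the variance of the resulting main-effect curve."""
    means = []
    for v in np.linspace(0, 1, grid):
        Z = rng.uniform(0, 1, size=(mc, 2))
        Z[:, i] = v
        means.append(gp.predict(Z).mean())
    return np.var(means)

s = [main_effect_variance(i) for i in range(2)]   # larger -> more influential
```

Because all 4,200 evaluations hit the emulator rather than the expensive code, this is affordable even when a single code run takes hours; here the emulator correctly attributes most of the output variance to the second input.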
Accident sequence analysis of human-computer interface design
International Nuclear Information System (INIS)
Fan, C.-F.; Chen, W.-H.
2000-01-01
It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques, an Augmented Fault Tree Analysis, and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points
Directory of Open Access Journals (Sweden)
Utro Filippo
2008-10-01
Background: Inferring cluster structure in microarray datasets is a fundamental task for the so-called -omic sciences. It is also a fundamental question in Statistics, Data Analysis and Classification, in particular with regard to the prediction of the number of clusters in a dataset, usually established via internal validation measures. Despite the wealth of internal measures available in the literature, new ones have been recently proposed, some of them specifically for microarray data. Results: We consider five such measures: Clest, Consensus (Consensus Clustering), FOM (Figure of Merit), Gap (Gap Statistics) and ME (Model Explorer), in addition to the classic WCSS (Within Cluster Sum-of-Squares) and KL (Krzanowski and Lai) index. We perform extensive experiments on six benchmark microarray datasets, using both Hierarchical and K-means clustering algorithms, and we provide an analysis assessing both the intrinsic ability of a measure to predict the correct number of clusters in a dataset and its merit relative to the other measures. We pay particular attention to both precision and speed. Moreover, we also provide various fast approximation algorithms for the computation of Gap, FOM and WCSS. The main result is a hierarchy of those measures in terms of precision and speed, highlighting some of their merits and limitations not reported before in the literature. Conclusion: Based on our analysis, we draw several conclusions for the use of those internal measures on microarray data. We report the main ones. Consensus is by far the best performer in terms of predictive power and is remarkably algorithm-independent. Unfortunately, on large datasets, it may be of no use because of its non-trivial computer time demand (weeks on a state-of-the-art PC). FOM is the second best performer although, quite surprisingly, it may not be competitive in this scenario: it has essentially the same predictive power as WCSS but it is from 6 to 100 times slower
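Of the measures compared above, WCSS is the simplest to state: it is the K-means objective itself, evaluated over a range of candidate k, with the "elbow" of the curve suggesting the number of clusters. A minimal sketch on synthetic data with three known clusters (illustrative, not the paper's benchmark datasets):

```python
# WCSS (Within Cluster Sum-of-Squares) curve on synthetic 3-cluster data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

def wcss(X, k):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return km.inertia_        # inertia_ is exactly the within-cluster SSQ

curve = [wcss(X, k) for k in range(1, 7)]
# The curve drops steeply up to the true k and flattens afterwards:
drops = [curve[i] - curve[i + 1] for i in range(len(curve) - 1)]
```

Measures such as Gap and KL formalize the elbow heuristic by comparing these drops against a reference; the speed hierarchy reported in the paper stems from how many such clusterings each measure must compute.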
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - which may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The system relaxation for fixed and distributed saving schemes is found to be different.
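The core update rule of a kinetic exchange model with saving is short enough to sketch directly: in each trade a random pair of agents keeps a fraction λ of their wealth and randomly splits the rest. This toy uses a fixed saving propensity and omits the paper's trap agents and distributed savings.

```python
# Monte Carlo sketch of a closed kinetic wealth-exchange model with a
# fixed saving propensity LAM (illustrative parameters).
import random

random.seed(0)
N, STEPS, LAM = 1000, 200_000, 0.5
wealth = [1.0] * N                       # everyone starts with unit wealth

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pool = (1 - LAM) * (wealth[i] + wealth[j])   # non-saved part is traded
    eps = random.random()                        # random split of the pool
    wealth[i] = LAM * wealth[i] + eps * pool
    wealth[j] = LAM * wealth[j] + (1 - eps) * pool

# Each trade conserves wealth[i] + wealth[j], so total wealth is constant;
# the stationary distribution is gamma-like with a power-law-free tail.
```

The paper's open systems break exactly this conservation: trap agents remove wealth, producing the stretched-exponential decay of the total wealth in time.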
DEFF Research Database (Denmark)
Lim, Young-il; Jørgensen, Sten Bay; Kim, In-Ho
2005-01-01
differential algebraic equation (PDAE) system, a fast and accurate numerical method (i.e., conservation element/solution element (CE/SE) method), is proposed. Sensitivity and elasticity of the model parameters (e.g., steric/shape factors, adsorption heat coefficient, effective protein charge, equilibrium...... constant, mass transfer coefficient, axial dispersion coefficient and bed voidage) are analyzed for a BSA-salt system in a low protein concentration range. Within a low concentration range of bovine serum albumin (BSA) where linear adsorption isotherms are shown, the adsorption heat coefficient, shape...... salt concentrations, it is proposed that the effective protein charge could depend upon the salt concentration (or ionic strength). The reason for this dependence may be a steric hindrance of protein binding sites combined with a salt shielding effect neutralizing the surface charges of the protein. (c...
Lewnard, Joseph A.; Antillón, Marina; Gonsalves, Gregg; Miller, Alice M.; Ko, Albert I.; Pitzer, Virginia E.
2016-01-01
Background Introduction of Vibrio cholerae to Haiti during the deployment of United Nations (UN) peacekeepers in 2010 resulted in one of the largest cholera epidemics of the modern era. Following the outbreak, a UN-commissioned independent panel recommended three pre-deployment intervention strategies to minimize the risk of cholera introduction in future peacekeeping operations: screening for V. cholerae carriage, administering prophylactic antimicrobial chemotherapies, or immunizing with oral cholera vaccines. However, uncertainty regarding the effectiveness of these approaches has forestalled their implementation by the UN. We assessed how the interventions would have impacted the likelihood of the Haiti cholera epidemic. Methods and Findings We developed a stochastic model for cholera importation and transmission, fitted to reported cases during the first weeks of the 2010 outbreak in Haiti. Using this model, we estimated that diagnostic screening reduces the probability of cases occurring by 82% (95% credible interval: 75%, 85%); however, false-positive test outcomes may hamper this approach. Antimicrobial chemoprophylaxis at time of departure and oral cholera vaccination reduce the probability of cases by 50% (41%, 57%) and by up to 61% (58%, 63%), respectively. Chemoprophylaxis beginning 1 wk before departure confers a 91% (78%, 96%) reduction independently, and up to a 98% reduction (94%, 99%) if coupled with vaccination. These results are not sensitive to assumptions about the background cholera incidence rate in the endemic troop-sending country. Further research is needed to (1) validate the sensitivity and specificity of rapid test approaches for detecting asymptomatic carriage, (2) compare prophylactic efficacy across antimicrobial regimens, and (3) quantify the impact of oral cholera vaccine on transmission from asymptomatic carriers. Conclusions Screening, chemoprophylaxis, and vaccination are all effective strategies to prevent cholera introduction
Ranked retrieval of Computational Biology models.
Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar
2010-08-11
The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
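The classical Information Retrieval machinery behind such a relevance ranking can be sketched with TF-IDF weighting and cosine similarity over model annotations. The model identifiers and annotation strings below are invented; the actual BioModels search engine combines far richer MIRIAM metadata.

```python
# Minimal sketch of ranked model retrieval: TF-IDF vectors over toy model
# annotations, ranked by cosine similarity to the query (names invented).
import math
from collections import Counter

MODELS = {
    "m1": "glycolysis kinetic model yeast",
    "m2": "cell cycle model yeast cdc",
    "m3": "calcium signaling neuron model",
}

def tfidf_vectors(docs):
    """Term frequency times inverse document frequency, per document."""
    tokenized = {k: v.split() for k, v in docs.items()}
    df = Counter(t for toks in tokenized.values() for t in set(toks))
    n = len(docs)
    return {k: {t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
            for k, toks in tokenized.items()}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    vecs = tfidf_vectors({**docs, "_q": query})
    q = vecs.pop("_q")
    return sorted(docs, key=lambda k: cosine(vecs[k], q), reverse=True)

ranking = rank("yeast glycolysis", MODELS)   # most relevant model first
```

The IDF term is what makes discriminative annotations (here "glycolysis") dominate common ones ("model"), which is exactly why a relevance ranking beats a plain keyword match in a growing model repository.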
Lönnberg, Tapio; Svensson, Valentine; James, Kylie R; Fernandez-Ruiz, Daniel; Sebina, Ismail; Montandon, Ruddy; Soon, Megan S F; Fogg, Lily G; Nair, Arya Sheela; Liligeto, Urijah; Stubbington, Michael J T; Ly, Lam-Ha; Bagger, Frederik Otzen; Zwiessele, Max; Lawrence, Neil D; Souza-Fonseca-Guimaraes, Fernando; Bunn, Patrick T; Engwerda, Christian R; Heath, William R; Billker, Oliver; Stegle, Oliver; Haque, Ashraful; Teichmann, Sarah A
2017-03-03
Differentiation of naïve CD4+ T cells into functionally distinct T helper subsets is crucial for the orchestration of immune responses. Due to extensive heterogeneity and multiple overlapping transcriptional programs in differentiating T cell populations, this process has remained a challenge for systematic dissection in vivo. By using single-cell transcriptomics and computational analysis with a temporal mixture of Gaussian processes model, termed GPfates, we reconstructed the developmental trajectories of Th1 and Tfh cells during blood-stage Plasmodium infection in mice. By tracking clonality using endogenous TCR sequences, we first demonstrated that Th1/Tfh bifurcation had occurred at both population and single-clone levels. Next, we identified genes whose expression was associated with Th1 or Tfh fates, and demonstrated a T-cell intrinsic role for Galectin-1 in supporting Th1 differentiation. We also revealed the close molecular relationship between Th1 and IL-10-producing Tr1 cells in this infection. Th1 and Tfh fates emerged from a highly proliferative precursor that upregulated aerobic glycolysis and accelerated cell cycling as cytokine expression began. Dynamic gene expression of chemokine receptors around bifurcation predicted roles for cell-cell communication in driving Th1/Tfh fates. In particular, we found that precursor Th cells were coached towards a Th1 but not a Tfh fate by inflammatory monocytes. Thus, by integrating genomic and computational approaches, our study has provided two unique resources: a database, www.PlasmoTH.org, which facilitates discovery of novel factors controlling Th1/Tfh fate commitment, and, more generally, GPfates, a modelling framework for characterizing cell differentiation towards multiple fates.
Computational methods for corpus annotation and analysis
Lu, Xiaofei
2014-01-01
This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.
Applied time series analysis and innovative computing
Ao, Sio-Iong
2010-01-01
This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.
Computational challenges in modeling gene regulatory events.
Pataskar, Abhijeet; Tiwari, Vijay K
2016-10-19
Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.
Computational methods in power system analysis
Idema, Reijer
2014-01-01
This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.
Notions of similarity for computational biology models
Waltemath, Dagmar
2016-03-21
Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
Notions of similarity for computational biology models
Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram
2016-01-01
Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
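One of the model aspects listed above, references to biological entities, admits a particularly simple similarity measure: the Jaccard overlap of the two models' annotation sets. A minimal sketch, with invented ontology identifiers:

```python
# One similarity aspect from the survey: compare two models by the overlap
# of their annotated biological entities (identifiers are invented).

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two annotation sets; 1.0 for two empty sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

model_a = {"GO:0006096", "CHEBI:15903", "UNIPROT:P06169"}
model_b = {"GO:0006096", "CHEBI:15903", "UNIPROT:P00330"}

sim = jaccard(model_a, model_b)   # 2 shared of 4 distinct annotations
```

A problem-specific composite measure in the spirit of the paper would weight this annotation score against structural, equation-level and behavioural similarities.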
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Energy Technology Data Exchange (ETDEWEB)
This report summarizes the findings and documents the programs developed by the Economic Regulatory Administration (ERA) for the Crude Oil Profile Analysis. Based on a variety of estimates and assumptions from different sources, ERA developed a series of computer programs which generate a 24-month projection of the crude oil production in each of the price categories. AVANTE International Systems Corporation was contracted to provide organized documentation for all of the databases, assumptions, estimation rationale and computing procedures.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Hybrid soft computing systems for electromyographic signals analysis: a review
2014-01-01
Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis. PMID:24490979
Hybrid soft computing systems for electromyographic signals analysis: a review.
Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates
2014-02-03
Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis.
Kaznowska, E; Depciuch, J; Łach, K; Kołodziej, M; Koziorowska, A; Vongsvivut, J; Zawlik, I; Cholewa, M; Cebulski, J
2018-08-15
Lung cancer has the highest mortality rate of all malignant tumours. The current results of lung cancer treatment, as well as its diagnostics, are unsatisfactory. It is therefore very important to introduce modern diagnostic tools which allow rapid classification of lung cancers and their degree of malignancy. For this purpose, the authors propose the use of Fourier Transform InfraRed (FTIR) spectroscopy combined with Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA) and a physics-based computational model. The results obtained for lung cancer tissues, adenocarcinoma and squamous cell carcinoma FTIR spectra, show a shift in wavenumbers compared to control tissue FTIR spectra. Furthermore, in the FTIR spectra of adenocarcinoma there are no peaks corresponding to glutamate or phospholipid functional groups. Moreover, in the case of G2 and G3 malignancy of adenocarcinoma lung cancer, the absence of an OH-group peak was noticed. Thus, it seems that FTIR spectroscopy is a valuable tool to classify lung cancer and to determine the degree of its malignancy.
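The PCA-LDA classification step used in such spectroscopy studies (dimensionality reduction followed by a linear discriminant) can be sketched on synthetic "spectra". The data below are simulated for illustration; real FTIR preprocessing, band assignments and the paper's patient data are not reproduced.

```python
# PCA-LDA sketch on synthetic spectra: two classes differ by a small peak
# shift plus noise (illustrative stand-in for FTIR tissue spectra).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, channels = 60, 400
grid = np.linspace(0, 1, channels)

def spectra(center):
    """n noisy Gaussian 'absorption peaks' centred at `center`."""
    peak = np.exp(-((grid - center) ** 2) / 0.002)
    return peak + 0.1 * rng.normal(size=(n, channels))

X = np.vstack([spectra(0.50), spectra(0.52)])   # "control" vs "tumour"
y = np.array([0] * n + [1] * n)

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5).mean()
```

PCA first compresses the 400 correlated channels into a few components, which keeps LDA well-conditioned despite having far more channels than samples; on this easy toy problem the cross-validated accuracy is well above chance.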
Distributed computing and nuclear reactor analysis
International Nuclear Information System (INIS)
Brown, F.B.; Derstine, K.L.; Blomquist, R.N.
1994-01-01
Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations
den Harder, Annemarie M; Willemink, Martin J; van Hamersvelt, Robbert W; Vonken, Evertjan P A; Schilham, Arnold M R; Lammers, Jan-Willem J; Luijk, Bart; Budde, Ricardo P J; Leiner, Tim; de Jong, Pim A
2016-01-01
The aim of the study was to determine the effects of dose reduction and iterative reconstruction (IR) on pulmonary nodule volumetry. In this prospective study, 25 patients scheduled for follow-up of pulmonary nodules were included. Computed tomography images were acquired at 4 dose levels with a median of 2.1, 1.2, 0.8, and 0.6 mSv. Data were reconstructed with filtered back projection (FBP), hybrid IR, and model-based IR. Volumetry was performed using semiautomatic software. At the highest dose level, more than 91% (34/37) of the nodules could be segmented, and at the lowest dose level, this was more than 83%. Thirty-three nodules were included for further analysis. Filtered back projection and hybrid IR did not lead to significant differences, whereas model-based IR resulted in lower volume measurements with a maximum difference of -11% compared with FBP at routine dose. Pulmonary nodule volumetry can be accurately performed at a submillisievert dose with both FBP and hybrid IR.
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
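The symmetric hopping picture behind the random barrier model is easy to sketch. Below is a hypothetical, minimal single-carrier Monte Carlo on a 1-D ring, not the authors' large-scale simulations: each link carries a quenched random barrier, the jump rate over it is an Arrhenius factor that is identical in both directions (the "symmetric" in symmetric hopping), and waiting times are ignored for brevity.

```python
import math
import random

random.seed(1)

# 1-D ring of N sites; link (i, i+1) carries a quenched random barrier E in [0, 1].
# Symmetric hopping: the rate exp(-E/T) is the same in both directions of a link.
N, T = 200, 0.3
rates = [math.exp(-random.random() / T) for _ in range(N)]

def walk(jumps):
    """Biased-coin walk of one carrier over the frozen barriers; returns squared displacement."""
    pos = 0
    for _ in range(jumps):
        r_right = rates[pos % N]        # barrier to the right of the current site
        r_left = rates[(pos - 1) % N]   # barrier to the left
        if random.random() < r_right / (r_right + r_left):
            pos += 1
        else:
            pos -= 1
    return pos * pos

msd = sum(walk(500) for _ in range(100)) / 100
print(f"mean squared displacement after 500 jumps: {msd:.1f}")
```

Lowering T strengthens the disorder's effect (high barriers trap the carrier longer), which is the mechanism behind the universal ac conduction features the record refers to.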
Computer assisted functional analysis. Computer gestuetzte funktionelle Analyse
Energy Technology Data Exchange (ETDEWEB)
Schmidt, H A.E.; Roesler, H
1982-01-01
The latest developments in computer-assisted functional analysis (CFA) in nuclear medicine are presented in about 250 papers of the 19th international annual meeting of the Society of Nuclear Medicine (Bern, September 1981). Apart from the mathematical and instrumental aspects of CFA, computerized emission tomography is given particular attention. Advances in nuclear medical diagnosis in the fields of radiopharmaceuticals, cardiology, angiology, neurology, ophthalmology, pulmonology, gastroenterology, nephrology, endocrinology, oncology and osteology are discussed.
Computational Intelligence in Intelligent Data Analysis
Nürnberger, Andreas
2013-01-01
Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. Like only a few researchers in that field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On the occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intell...
Multivariate analysis: models and method
International Nuclear Information System (INIS)
Sanz Perucha, J.
1990-01-01
Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. A final goal is decision making. The paper describes the models and methods of multivariate analysis
Computer System Analysis for Decommissioning Management of Nuclear Reactor
International Nuclear Information System (INIS)
Nurokhim; Sumarbagiono
2008-01-01
Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems for the management of nuclear power reactors have been studied. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, were used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phase, and spatial data system development for virtual reality. (author)
Models of parallel computation: a survey and classification
Institute of Scientific and Technical Information of China (English)
ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun
2007-01-01
In this paper, the state-of-the-art parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their target architecture features, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed based on this three-generation classification. We believe that with the ever-increasing speed gap between the CPU and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.
Computational Modeling of Culture's Consequences
Hofstede, G.J.; Jonker, C.M.; Verwaart, T.
2010-01-01
This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,
Computational aspects of premixing modelling
Energy Technology Data Exchange (ETDEWEB)
Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.
1998-01-01
In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)
Computational modeling of concrete flow
DEFF Research Database (Denmark)
Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic
2007-01-01
particle flow, and numerical techniques allowing the modeling of particles suspended in a fluid. The general concept behind each family of techniques is described. Pros and cons for each technique are given along with examples and references to applications to fresh cementitious materials....
DFT computational analysis of piracetam
Rajesh, P.; Gunasekaran, S.; Seshadri, S.; Gnanasambandan, T.
2014-11-01
Density functional theory calculations with B3LYP using the 6-31G(d,p) and 6-31++G(d,p) basis sets have been used to determine ground state molecular geometries. The first order hyperpolarizability (β0) and related properties (β, α0 and Δα) of piracetam are calculated using the B3LYP/6-31G(d,p) method based on the finite-field approach. The stability of the molecule has been analyzed using NBO/NLMO analysis. The calculation of the first hyperpolarizability shows that the molecule is an attractive candidate for future applications in non-linear optics. The molecular electrostatic potential (MEP) at a point in the space around a molecule gives an indication of the net electrostatic effect produced at that point by the total charge distribution of the molecule. The calculated HOMO and LUMO energies show that charge transfer occurs within these molecules. A Mulliken population analysis of atomic charges is also carried out. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-Vis spectra and electronic absorption properties are explained and illustrated from the frontier molecular orbitals.
Computer Modeling of Direct Metal Laser Sintering
Cross, Matthew
2014-01-01
A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
Model to Implement Virtual Computing Labs via Cloud Computing Services
Directory of Open Access Journals (Sweden)
Washington Luna Encalada
2017-07-01
Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution’s physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning
Parallel Computing for Terrestrial Ecosystem Carbon Modeling
International Nuclear Information System (INIS)
Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael
2011-01-01
Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind its counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO2. The influence of terrestrial ecosystems on atmospheric CO2 can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO2 concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adapted simple box-flow exchange models, in which photosynthetic CO2 uptake and respiratory CO2 release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality, etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change
Computer-Aided Communication Satellite System Analysis and Optimization.
Stagl, Thomas W.; And Others
Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…
Computational modeling of epiphany learning.
Chen, Wei James; Krajbich, Ian
2017-05-02
Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
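The "latent evidence accumulation" idea behind the epiphany learning model can be sketched in a few lines. The toy below is hypothetical, not the authors' sequential-sampling model: each round of feedback adds noisy evidence for the optimal strategy, and the agent commits (has its epiphany) once the accumulated evidence crosses a bound; crossing the wrong bound stands in for committing to a suboptimal strategy.

```python
import random

random.seed(42)

def rounds_to_epiphany(drift=0.3, noise=1.0, bound=5.0, max_rounds=1000):
    """Accumulate noisy evidence until a bound is crossed.

    Returns (rounds taken, True if the correct bound was reached)."""
    evidence, n = 0.0, 0
    while abs(evidence) < bound and n < max_rounds:
        evidence += drift + random.gauss(0.0, noise)  # feedback from one game round
        n += 1
    return n, evidence >= bound

trials = [rounds_to_epiphany() for _ in range(200)]
mean_rounds = sum(n for n, _ in trials) / len(trials)
correct_rate = sum(ok for _, ok in trials) / len(trials)
print(f"mean rounds to commit: {mean_rounds:.1f}, correct epiphanies: {correct_rate:.2f}")
```

In this two-boundary setup the expected commitment time scales roughly as bound/drift, and a stronger drift (clearer feedback) both speeds up epiphanies and makes them more likely to be correct.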
Componential analysis of kinship terminology a computational perspective
Pericliev, V
2013-01-01
This book presents the first computer program automating the task of componential analysis of kinship vocabularies. The book examines the program in relation to two basic problems: the commonly occurring inconsistency of componential models; and the huge number of alternative componential models.
Structural mode significance using INCA. [Interactive Controls Analysis computer program
Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.
1990-01-01
Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.
Turbo Pascal Computer Code for PIXE Analysis
International Nuclear Information System (INIS)
Darsono
2002-01-01
To optimize utilization of the 150 kV ion accelerator facilities and to master the analysis techniques based on the ion accelerator, research and development of low energy PIXE technology has been carried out. The R and D for the hardware of the low energy PIXE installation in P3TM has been carried on since the year 2000. To support the R and D of the PIXE accelerator facilities in harmony with the R and D of the PIXE hardware, development of PIXE analysis software is also needed. The development of the database of the PIXE analysis software using a Turbo Pascal computer code is reported in this paper. This computer code computes the ionization cross-section, the fluorescence yield, and the stopping power of elements, and it also computes the X-ray energy attenuation coefficient. The computer code is named PIXEDASIS and it is part of a larger computer code planned for PIXE analysis that will be constructed in the near future. PIXEDASIS is designed to be communicative with the user. It takes input from the keyboard. The output is shown on the PC monitor and can also be printed. The performance test of PIXEDASIS shows that it can be operated well and that it provides data in agreement with data from other literature. (author)
Global Stability of an Epidemic Model of Computer Virus
Directory of Open Access Journals (Sweden)
Xiaofan Yang
2014-01-01
Full Text Available With the rapid popularization of the Internet, computers can enter or leave the Internet increasingly frequently. In fact, no antivirus software can detect and remove all sorts of computer viruses. This implies that viruses will persist on the Internet. To better understand the spread of computer viruses in these situations, a new propagation model is established and analyzed. The unique equilibrium of the model is globally asymptotically stable, in accordance with reality. A parameter analysis of the equilibrium is also conducted.
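A compartmental virus-propagation model of the kind described here can be illustrated with a toy system. The sketch below is hypothetical, not the paper's exact equations: computers are susceptible (S) or infected (I), nodes join and leave the network at rate mu, infection spreads at rate beta, and cure returns machines to S at rate gamma; forward-Euler integration shows convergence to the unique endemic equilibrium.

```python
def simulate(beta=0.5, gamma=0.2, mu=0.1, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Integrate the toy S-I system with forward Euler; returns (S, I) at the end."""
    s, i = s0, i0
    for _ in range(steps):
        ds = mu - beta * s * i + gamma * i - mu * s   # inflow, infection, cure, outflow
        di = beta * s * i - gamma * i - mu * i
        s, i = s + dt * ds, i + dt * di
    return s, i

s_inf, i_inf = simulate()
# For this toy system the endemic equilibrium is S* = (gamma + mu) / beta = 0.6,
# and with S + I conserved at 1, I* = 0.4.
print(f"S* = {s_inf:.3f}, I* = {i_inf:.3f}")
```

Since beta/(gamma + mu) > 1 here, the infection persists regardless of the initial state, mirroring the abstract's point that viruses remain endemic on the Internet.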
Affect and Learning : a computational analysis
Broekens, Douwe Joost
2007-01-01
In this thesis we have studied the influence of emotion on learning. We have used computational modelling techniques to do so, more specifically, the reinforcement learning paradigm. Emotion is modelled as artificial affect, a measure that denotes the positiveness versus negativeness of a situation
Adrian Ioana; Tiberiu Socaciu
2013-01-01
The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, psychological analysis. Also we present the main object of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...
Directory of Open Access Journals (Sweden)
Ingrid Różyło-Kalinowska
2014-11-01
Full Text Available Cone-beam computed tomography (CBCT is a relatively new, but highly efficient imaging method applied first in dentistry in 1998. However, the quality of the obtained slices depends among other things on artifacts generated by dental restorations as well as orthodontic and prosthetic appliances. The aim of the study was to quantify the artifacts produced by standard prosthetic inlays in CBCT images. The material consisted of 17 standard prosthetic inlays mounted in dental roots embedded in resin. The samples were examined by means of a large field of view CBCT unit, Galileos (Sirona, Germany, at 85 kV and 14 mAs. The analysis was performed using Able 3DDoctor software for data in the CT raster space as well as by means of Materialise Magics software for generated vector models (STL. The masks generated in the raster space included the area of the inlays together with image artifacts. The region of interest (ROI of the raster space is a set of voxels from a selected range of Hounsfield units (109-3071. Ceramic inlay with zirconium dioxide (Cera Post as well as epoxy resin inlay including silica fibers enriched with zirconium (Easy Post produced the most intense artifacts. The smallest image distortions were created by titanium inlays, both passive (Harald Nordin and active (Flexi Flange. Inlays containing zirconium generated the strongest artifacts, thus leading to the greatest distortions in the CBCT images. Carbon fiber inlay did not considerably affect the image quality.
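The raster-space ROI step described in this abstract, keeping voxels whose values fall inside a Hounsfield-unit window (109-3071 in the study), amounts to a simple threshold mask. The sketch below is hypothetical, using a tiny synthetic volume rather than CBCT data:

```python
import random

random.seed(3)

LOW, HIGH = 109, 3071  # HU window used in the study for the inlay + artifacts

# Toy 8x8x8 "volume": background values in the air/soft-tissue range,
# plus a few metal-bright voxels standing in for a prosthetic inlay.
volume = [[[random.randint(-1000, 100) for _ in range(8)]
           for _ in range(8)] for _ in range(8)]
for z, y, x in [(2, 3, 3), (2, 3, 4), (3, 3, 3)]:
    volume[z][y][x] = 3000  # simulated inlay voxels

# The ROI mask: all voxel coordinates whose value lies inside the HU window.
mask = [(z, y, x)
        for z in range(8) for y in range(8) for x in range(8)
        if LOW <= volume[z][y][x] <= HIGH]
print(f"{len(mask)} voxels inside the HU window")
```

On real data the mask captures the inlay together with its bright streak artifacts, which is exactly why the study measures artifact extent from this segmented region.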
Computational models of airway branching morphogenesis.
Varner, Victor D; Nelson, Celeste M
2017-07-01
The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Computational multiscale modeling of intergranular cracking
International Nuclear Information System (INIS)
Simonovski, Igor; Cizelj, Leon
2011-01-01
A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.
Modeling multimodal human-computer interaction
Obrenovic, Z.; Starcevic, D.
2004-01-01
Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze
A Computational Model of Selection by Consequences
McDowell, J. J.
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…
Generating Computational Models for Serious Gaming
Westera, Wim
2018-01-01
Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of
An integrative computational modelling of music structure apprehension
DEFF Research Database (Denmark)
Lartillot, Olivier
2014-01-01
An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusions of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current researches cover particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis. The computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological uses.
Security Management Model in Cloud Computing Environment
Ahmadpanah, Seyed Hossein
2016-01-01
In the cloud computing environment, the number of cloud virtual machines (VMs) keeps growing, so VM security and management face giant challenges. To address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling; it studies the virtual machine security architecture and, based on AHP (Analytic Hierarchy Process), virtual machine de...
Ewe: a computer model for ultrasonic inspection
International Nuclear Information System (INIS)
Douglas, S.R.; Chaplin, K.R.
1991-11-01
The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
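The "direct lighting only" models of the early algorithms mentioned here are simple enough to write down. Below is a hypothetical sketch of classic local illumination, Lambert diffuse plus Phong specular, which is one standard formulation rather than any specific algorithm from the article:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, to_light, to_eye, kd=0.7, ks=0.3, shininess=32):
    """Local reflection: Lambertian diffuse term + Phong specular lobe."""
    n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = kd * max(0.0, dot(n, l))
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))  # reflect l about n
    specular = ks * max(0.0, dot(r, e)) ** shininess
    return diffuse + specular

# Light and eye both along the surface normal: maximal diffuse + specular.
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 0.7 + 0.3 = 1.0
```

Ray tracing, radiosity, and Monte Carlo methods extend exactly this kind of local term with recursive or integral estimates of indirect light, which is the progression the article traces.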
Finite difference computing with exponential decay models
Langtangen, Hans Petter
2016-01-01
This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
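The book's model problem, u' = -a·u with u(0) = I, admits a one-line finite difference update. The sketch below shows the standard theta-rule family (theta = 0 is Forward Euler, 1 is Backward Euler, 1/2 is Crank-Nicolson); it follows the common textbook treatment rather than reproducing the book's own code:

```python
import math

def solve(I=1.0, a=2.0, T=4.0, dt=0.1, theta=0.5):
    """Theta-rule for u' = -a*u: u[n+1] = u[n]*(1-(1-theta)*a*dt)/(1+theta*a*dt)."""
    n_steps = int(round(T / dt))
    u = [I]
    for _ in range(n_steps):
        u.append(u[-1] * (1 - (1 - theta) * a * dt) / (1 + theta * a * dt))
    return u

u = solve()                     # Crank-Nicolson by default
exact = math.exp(-2.0 * 4.0)    # exact solution I*exp(-a*T)
error = abs(u[-1] - exact)
print(f"u(T)={u[-1]:.6f}, exact={exact:.6f}, error={error:.2e}")
```

Halving dt should shrink the Crank-Nicolson error by roughly a factor of four (second-order accuracy), which is exactly the kind of verification-by-convergence-rate test the book advocates.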
Random matrix model of adiabatic quantum computing
International Nuclear Information System (INIS)
Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.
2005-01-01
We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
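The regular-vs-chaotic distinction the abstract draws rests on the nearest-neighbor spacing distributions: Poissonian p(s) = exp(-s) for regular spectra versus the Wigner surmise p(s) = (π/2)·s·exp(-π·s²/4) for chaotic ones, with the Brody distribution interpolating between them. A hypothetical sketch contrasting the two limits by sampling normalized spacings (both have mean 1 but different variances):

```python
import math
import random

random.seed(7)

def poisson_spacing():
    """Inverse-CDF sample from p(s) = exp(-s) (regular, Poissonian spectra)."""
    return -math.log(random.random())

def wigner_spacing():
    """Inverse-CDF sample from the Wigner surmise: CDF(s) = 1 - exp(-pi*s^2/4)."""
    return math.sqrt(-4.0 / math.pi * math.log(random.random()))

def var(samples):
    m = sum(samples) / len(samples)
    return sum((x - m) ** 2 for x in samples) / len(samples)

n = 100_000
vp = var([poisson_spacing() for _ in range(n)])
vw = var([wigner_spacing() for _ in range(n)])
print(f"Poisson spacing variance ~ {vp:.3f} (expect 1.000)")
print(f"Wigner spacing variance  ~ {vw:.3f} (expect {4 / math.pi - 1:.3f})")
```

The narrower Wigner distribution reflects level repulsion; estimating where an observed spectrum sits between these limits is what the Brody parameter quantifies.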
Do's and Don'ts of Computer Models for Planning
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
ASTEC: Controls analysis for personal computers
Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.
1989-01-01
The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.
Quantum Vertex Model for Reversible Classical Computing
Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng
We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite-temperature phase transitions, including no glass transitions, independent of the circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel, more efficient heuristic, "annealing with learning," to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
Modeling with data tools and techniques for scientific computing
Klemens, Ben
2009-01-01
Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods
Introducing remarks upon the analysis of computer systems performance
International Nuclear Information System (INIS)
Baum, D.
1980-05-01
Some of the basic ideas of analytical techniques to study the behaviour of computer systems are presented. Single systems as well as networks of computers are viewed as stochastic dynamical systems which may be modelled by queueing networks. Therefore this report primarily serves as an introduction to probabilistic methods for qualitative analysis of systems. It is supplemented by an application example of Chandy's collapsing method. (orig.)
Strategic Analysis of Autodesk and the Move to Cloud Computing
Kewley, Kathleen
2012-01-01
This paper provides an analysis of the opportunity for Autodesk to move its core technology to a cloud delivery model. Cloud computing offers clients a number of advantages, such as lower costs for computer hardware, increased access to technology and greater flexibility. With the IT industry embracing this transition, software companies need to plan for future change and lead with innovative solutions. Autodesk is in a unique position to capitalize on this market shift, as it is the leader i...
A computer program for activation analysis
International Nuclear Information System (INIS)
Rantanen, J.; Rosenberg, R.J.
1983-01-01
A computer program for calculating the results of activation analysis is described. The program comprises two gamma spectrum analysis programs, STOAV and SAMPO, and one program for calculating elemental concentrations, KVANT. STOAV is based on a simple summation of channels, while SAMPO is based on the fitting of mathematical functions. The programs were tested by analyzing the IAEA G-1 test spectra. In the determination of peak location SAMPO is somewhat better than STOAV, and in the determination of peak area SAMPO is more than twice as accurate as STOAV. On the other hand, SAMPO is three times as expensive as STOAV with the use of a Cyber 170 computer. (author)
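The channel-summation approach underlying STOAV can be sketched generically as follows. This is an assumption-laden illustration (the abstract does not specify STOAV's actual algorithm): sum the counts in the peak region and subtract a linear background estimated from a few channels on each side.

```python
import numpy as np

def net_peak_area(spectrum, lo, hi, bg_width=3):
    """Net peak area by simple channel summation: sum counts in channels
    [lo, hi] and subtract a trapezoidal background estimated from
    bg_width channels on each side of the peak region."""
    spectrum = np.asarray(spectrum, dtype=float)
    gross = spectrum[lo:hi + 1].sum()
    left = spectrum[lo - bg_width:lo].mean()    # mean background, left side
    right = spectrum[hi + 1:hi + 1 + bg_width].mean()  # right side
    n_channels = hi - lo + 1
    background = 0.5 * (left + right) * n_channels
    return gross - background
```

On a synthetic spectrum with a flat background of 10 counts per channel and a peak of 100 net counts, this recovers the net area exactly; on real spectra the accuracy gap relative to function fitting (SAMPO) arises from overlapping peaks and curved backgrounds.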
Computational disease modeling – fact or fiction?
Directory of Open Access Journals (Sweden)
Stephan Klaas
2009-06-01
Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.
Computational modeling of turn-taking dynamics in spoken conversations
Chowdhury, Shammur Absar
2017-01-01
The study of human interaction dynamics has been at the center for multiple research disciplines in- cluding computer and social sciences, conversational analysis and psychology, for over decades. Recent interest has been shown with the aim of designing computational models to improve human-machine interaction system as well as support humans in their decision-making process. Turn-taking is one of the key aspects of conversational dynamics in dyadic conversations and is an integral part of hu...
Convergence Analysis of a Class of Computational Intelligence Approaches
Directory of Open Access Journals (Sweden)
Junfeng Chen
2013-01-01
Full Text Available Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built for a class of computational intelligence approaches, represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate, respectively, the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution-set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.
Conference “Computational Analysis and Optimization” (CAO 2011)
Tiihonen, Timo; Tuovinen, Tero; Numerical Methods for Differential Equations, Optimization, and Technological Problems : Dedicated to Professor P. Neittaanmäki on His 60th Birthday
2013-01-01
This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference “Computational Analysis and Optimization” (CAO 2011) held in Jyväskylä, Finland, June 9–11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmäki on the occasion of his sixtieth birthday. It consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmäki.
Enabling Grid Computing resources within the KM3NeT computing model
Directory of Open Access Journals (Sweden)
Filippidis Christos
2016-01-01
Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.
Towards The Deep Model : Understanding Visual Recognition Through Computational Models
Wang, Panqu
2017-01-01
Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks.I first describe how a neurocomputat...
Safety analysis of control rod drive computers
International Nuclear Information System (INIS)
Ehrenberger, W.; Rauch, G.; Schmeil, U.; Maertz, J.; Mainka, E.U.; Nordland, O.; Gloee, G.
1985-01-01
The analysis of the most significant user programmes revealed no errors in these programmes. The evaluation of approximately 82 cumulated years of operation demonstrated that the operating system of the control rod positioning processor has a reliability that is sufficiently good for the tasks this computer has to fulfil. Computers can be used for safety-relevant tasks. The experience gained with the control rod positioning processor confirms that computers are not less reliable than conventional instrumentation and control systems for comparable tasks. The examination and evaluation of computers for safety-relevant tasks can be done with programme analysis or statistical evaluation of the operating experience. Programme analysis is recommended for seldom-used and well-structured programmes. For programmes with a long cumulated operating time a statistical evaluation is more advisable. The effort for examination and evaluation is not greater than the corresponding effort for conventional instrumentation and control systems. This project has also revealed that, where it is technologically sensible, process-controlling computers or microprocessors can be qualified for safety-relevant tasks without undue effort. (orig./HP)
The role of computer modelling in participatory integrated assessments
International Nuclear Information System (INIS)
Siebenhuener, Bernd; Barth, Volker
2005-01-01
In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the involved non-scientists develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experience gained in three projects that used computer models, from a participatory and a risk-management perspective. Our cross-cutting analysis of the objectives, the employed project designs and moderation schemes, and the observed learning processes in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk-management phases, computer models best serve the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes.
Hybrid computer modelling in plasma physics
International Nuclear Information System (INIS)
Hromadka, J; Ibehej, T; Hrach, R
2016-01-01
Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models at low pressures. It is possible to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations when the particle distribution function is not Maxwellian. We follow this route to enhance the fluid model and then use it within a hybrid plasma model. (paper)
Directory of Open Access Journals (Sweden)
Gabrielle Stetz
2017-01-01
Full Text Available Allosteric interactions in the Hsp70 proteins are linked with their regulatory mechanisms and cellular functions. Despite significant progress in structural and functional characterization of the Hsp70 proteins fundamental questions concerning modularity of the allosteric interaction networks and hierarchy of signaling pathways in the Hsp70 chaperones remained largely unexplored and poorly understood. In this work, we proposed an integrated computational strategy that combined atomistic and coarse-grained simulations with coevolutionary analysis and network modeling of the residue interactions. A novel aspect of this work is the incorporation of dynamic residue correlations and coevolutionary residue dependencies in the construction of allosteric interaction networks and signaling pathways. We found that functional sites involved in allosteric regulation of Hsp70 may be characterized by structural stability, proximity to global hinge centers and local structural environment that is enriched by highly coevolving flexible residues. These specific characteristics may be necessary for regulation of allosteric structural transitions and could distinguish regulatory sites from nonfunctional conserved residues. The observed confluence of dynamics correlations and coevolutionary residue couplings with global networking features may determine modular organization of allosteric interactions and dictate localization of key mediating sites. Community analysis of the residue interaction networks revealed that concerted rearrangements of local interacting modules at the inter-domain interface may be responsible for global structural changes and a population shift in the DnaK chaperone. The inter-domain communities in the Hsp70 structures harbor the majority of regulatory residues involved in allosteric signaling, suggesting that these sites could be integral to the network organization and coordination of structural changes. Using a network-based formalism of
Sentiment analysis and ontology engineering an environment of computational intelligence
Chen, Shyi-Ming
2016-01-01
This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms, and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation to emotion analysis, carrying out interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, and a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...
Surface computing and collaborative analysis work
Brown, Judith; Gossage, Stevenson; Hack, Chris
2013-01-01
Large surface computing devices (wall-mounted or tabletop) with touch interfaces and their application to collaborative data analysis, an increasingly important and prevalent activity, is the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personne...
Computer-assisted qualitative data analysis software.
Cope, Diane G
2014-05-01
Advances in technology have provided new approaches for data collection methods and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually, and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software and provides ease in the management of large data sets and rapid analysis of numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
Computational intelligence applications in modeling and control
Vaidyanathan, Sundarapandian
2015-01-01
The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of the intelligent activity of human beings and nature. The essence of systems based on computational intelligence is to process and interpret data of various kinds, so CI is closely connected with the growth of available data as well as the capabilities for processing them, which are mutually supportive factors. Theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine, and others. They are used in image and sound processing and identification, signal processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, request systems in banking, diagnostic systems, expert systems, and many other practical implementations. This book consists of 16 contributed chapters by subject experts who are specialized in the various topics addressed in this book. The special chapters have been brought ...
Computation for the analysis of designed experiments
Heiberger, Richard
2015-01-01
Addresses the statistical, mathematical, and computational aspects of the construction of packages and analysis of variance (ANOVA) programs. Includes a disk at the back of the book that contains all program codes in four languages, APL, BASIC, C, and FORTRAN. Presents illustrations of the dual space geometry for all designs, including confounded designs.
Computational Analysis of SAXS Data Acquisition.
Dong, Hui; Kim, Jin Seob; Chirikjian, Gregory S
2015-09-01
Small-angle x-ray scattering (SAXS) is an experimental biophysical method used for gaining insight into the structure of large biomolecular complexes. Under appropriate chemical conditions, the information obtained from a SAXS experiment can be equated to the pair distribution function, which is the distribution of distances between every pair of points in the complex. Here we develop a mathematical model to calculate the pair distribution function for a structure of known density, and analyze the computational complexity of these calculations. Efficient recursive computation of this forward model is an important step in solving the inverse problem of recovering the three-dimensional density of biomolecular structures from their pair distribution functions. In particular, we show that integrals of products of three spherical-Bessel functions arise naturally in this context. We then develop an algorithm for the efficient recursive computation of these integrals.
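A discrete sketch of the forward model described above: if the structure is approximated by weighted sample points (an assumption on our part; the paper works with continuous densities and spherical-Bessel expansions), the pair distribution function reduces to a weighted histogram of all pairwise distances.

```python
import numpy as np

def pair_distribution(points, weights=None, bins=50):
    """Weighted histogram of all pairwise distances in a structure: a
    discrete approximation of the SAXS pair distribution function p(r)
    for a density represented by weighted sample points."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    if weights is None:
        weights = np.ones(n)
    # all pairwise Euclidean distances, O(n^2) memory
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.outer(weights, weights)
    iu = np.triu_indices(n, k=1)          # count each unordered pair once
    hist, edges = np.histogram(d[iu], bins=bins, weights=w[iu])
    return hist, edges
```

The brute-force O(n^2) cost here is precisely what motivates the efficient recursive computation analyzed in the paper.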
Computational analysis of ozonation in bubble columns
International Nuclear Information System (INIS)
Quinones-Bolanos, E.; Zhou, H.; Otten, L.
2002-01-01
This paper presents a new computational ozonation model based on the principle of computational fluid dynamics along with the kinetics of ozone decay and microbial inactivation to predict the performance of ozone disinfection in fine bubble columns. The model can be represented using a mixture two-phase flow model to simulate the hydrodynamics of the water flow and using two transport equations to track the concentration profiles of ozone and microorganisms along the height of the column, respectively. The applicability of this model was then demonstrated by comparing the simulated ozone concentrations with experimental measurements obtained from a pilot scale fine bubble column. One distinct advantage of this approach is that it does not require the prerequisite assumptions such as plug flow condition, perfect mixing, tanks-in-series, uniform radial or longitudinal dispersion in predicting the performance of disinfection contactors without carrying out expensive and tedious tracer studies. (author)
Analysis and computation of microstructure in finite plasticity
Hackl, Klaus
2015-01-01
This book addresses the need for a fundamental understanding of the physical origin, the mathematical behavior, and the numerical treatment of models which include microstructure. Leading scientists present their efforts involving mathematical analysis, numerical analysis, computational mechanics, material modelling and experiment. The mathematical analyses are based on methods from the calculus of variations, while in the numerical implementation global optimization algorithms play a central role. The modeling covers all length scales, from the atomic structure up to macroscopic samples. The development of the models was guided by experiments on single crystals and polycrystals, and the results are checked against experimental data.
Applied Mathematics, Modelling and Computational Science
Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan
2015-01-01
The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. These proceedings contain refereed papers contributed by the participants of AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
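One standard device for the discretization-error estimation discussed above, offered here as a generic sketch rather than the specific method of the report, is Richardson extrapolation: compare solutions on two grids whose spacings differ by a known ratio, using the scheme's formal order of accuracy.

```python
def richardson_error(f_h, f_h2, p, r=2.0):
    """Estimate the discretization error of a fine-grid result f_h.

    Assumes f_h = f_exact + C*h**p and f_h2 = f_exact + C*(r*h)**p,
    where f_h2 is the result on a grid coarsened by factor r and p is
    the formal order of accuracy. Subtracting the two expansions gives
    the leading error of f_h as (f_h2 - f_h) / (r**p - 1)."""
    return (f_h2 - f_h) / (r**p - 1.0)
```

For example, a second-order central-difference derivative of sin(x) at x = 0 evaluated with h = 0.1 and h = 0.2 yields an error estimate within a few parts in a thousand of the true error; subtracting the estimate from the fine-grid value gives an extrapolated result of higher order.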
Modeling inputs to computer models used in risk assessment
International Nuclear Information System (INIS)
Iman, R.L.
1987-01-01
Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process, and reasons are given for using probability-based modeling with the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
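One common way to impose dependence among inputs while preserving specified marginal distributions, in the spirit of (though not identical to) the rank-correlation techniques associated with this line of work, is a Gaussian copula. The sketch below is our own illustration, not the paper's method:

```python
import numpy as np
from scipy import stats

def correlated_inputs(n, corr, marginals, seed=0):
    """Draw n joint samples whose dependence comes from a Gaussian copula
    with correlation matrix `corr`, and whose marginals are the given
    frozen scipy.stats distributions."""
    rng = np.random.default_rng(seed)
    # correlated standard normals -> correlated uniforms via the normal CDF
    z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=n)
    u = stats.norm.cdf(z)
    # map each uniform column through the target marginal's inverse CDF
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
```

Usage: `correlated_inputs(1000, [[1, 0.8], [0.8, 1]], [stats.uniform(), stats.expon()])` yields uniform and exponential inputs with strong positive rank correlation, suitable for feeding a risk-assessment model.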
Integrating interactive computational modeling in biology curricula.
Directory of Open Access Journals (Sweden)
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
Computer Modelling of Photochemical Smog Formation
Huebert, Barry J.
1974-01-01
Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)
A Computational Model of Fraction Arithmetic
Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.
2017-01-01
Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…
Model Checking - Automated Verification of Computational Systems
Indian Academy of Sciences (India)
Madhavan Mukund. General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681. Fulltext PDF available.
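The article's subject lends itself to a compact illustration. The sketch below, assuming nothing from the article itself, does explicit-state model checking in miniature: it enumerates the reachable states of a toy two-process mutual-exclusion model by breadth-first search and verifies a safety property (the two processes are never in the critical section together).

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Explicit-state model checking in miniature: breadth-first
    exploration of the reachable state space, verifying that no state
    in 'bad' is reachable (a safety property)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state in bad:
            return False            # a counterexample is reachable
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

def moves(state):
    """Toy two-process mutual-exclusion model; each process cycles
    idle -> trying -> critical, entering 'critical' only when the
    other process is not already there."""
    a, b = state
    nxt = []
    if a == "idle":
        nxt.append(("trying", b))
    if a == "trying" and b != "critical":
        nxt.append(("critical", b))
    if a == "critical":
        nxt.append(("idle", b))
    if b == "idle":
        nxt.append((a, "trying"))
    if b == "trying" and a != "critical":
        nxt.append((a, "critical"))
    if b == "critical":
        nxt.append((a, "idle"))
    return nxt

# Safety property: never both processes in the critical section.
safe = check_safety(("idle", "idle"), moves, {("critical", "critical")})
```

Real model checkers add symbolic state representations and temporal-logic properties, but the reachability core is this loop.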
Computational Modeling of Complex Protein Activity Networks
Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude
2017-01-01
Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a
Computer Modeling of Platinum Reforming Reactors | Momoh ...
African Journals Online (AJOL)
This paper, instead of using a theoretical approach has considered a computer model as means of assessing the reformate composition for three-stage fixed bed reactors in platforming unit. This is done by identifying many possible hydrocarbon transformation reactions that are peculiar to the process unit, identify the ...
Particle modeling of plasmas computational plasma physics
International Nuclear Information System (INIS)
Dawson, J.M.
1991-01-01
Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged; it is computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand what the important processes are. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments; it allows them to determine if they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. There is a wide range of plasma models in use: particle models, fluid models, and hybrid particle-fluid models. These can come in many forms, such as explicit models, implicit models, reduced-dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and almost an endless variety of other models. Here the author will only discuss particle models. He will give a few examples of the use of such models; these will be taken from work done by the Plasma Modeling Group at UCLA because he is most familiar with that work. However, this gives only a small view of the wide range of work being done around the US, or for that matter around the world.
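A minimal illustration of the particle models discussed: the sketch below performs one toy step of a one-dimensional electrostatic particle-in-cell scheme (nearest-grid-point charge deposit, FFT Poisson solve on a periodic grid, field gather, push) in normalized units where the plasma frequency is 1. It is an assumption-laden teaching sketch, not a reproduction of the UCLA group's codes.

```python
import numpy as np

def pic_step(x, v, grid_n, length, dt):
    """One step of a 1-D electrostatic particle-in-cell model:
    electrons (q/m = -1) against a fixed neutralizing ion background."""
    dx = length / grid_n
    cells = np.floor(x / dx).astype(int) % grid_n
    # Electron density perturbation relative to the uniform background.
    delta = np.bincount(cells, minlength=grid_n) / (len(x) / grid_n) - 1.0
    # Poisson: d2(phi)/dx2 = delta (charge density is -delta for electrons).
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    k[0] = 1.0                           # placeholder; mean mode zeroed below
    phi_hat = -np.fft.fft(delta) / k**2
    phi_hat[0] = 0.0
    efield = np.real(np.fft.ifft(-1j * k * phi_hat))   # E = -d(phi)/dx
    v = v - efield[cells] * dt           # acceleration = (q/m) E = -E
    x = (x + v * dt) % length
    return x, v

# Cold plasma with a small sinusoidal density perturbation.
length, n_part, n_grid = 2.0 * np.pi, 10_000, 64
x = np.linspace(0.0, length, n_part, endpoint=False)
x = (x + 0.01 * np.sin(x)) % length
v = np.zeros(n_part)
for _ in range(10):
    x, v = pic_step(x, v, n_grid, length, dt=0.05)
```

Production particle codes add higher-order weighting, magnetic fields, and parallel decomposition, but the deposit-solve-gather-push cycle is the same.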
Reproducibility in Computational Neuroscience Models and Simulations
McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.
2016-01-01
Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: 1. standardized neural simulators, 2. shared computational resources, 3. declarative model descriptors, ontologies and standardized annotations; 4. model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845
Applied modelling and computing in social science
Povh, Janez
2015-01-01
In social science, outstanding results are yielded by advanced simulation methods based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.
Validation of a phytoremediation computer model
Energy Technology Data Exchange (ETDEWEB)
Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)
1999-01-01
The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹
Directory of Open Access Journals (Sweden)
João C. F. Borges Júnior
2008-09-01
Full Text Available Techniques for evaluating the risks arising from uncertainties inherent in agricultural activity should accompany planning studies. Risk analysis can be carried out by simulation, using techniques such as the Monte Carlo method. This study was carried out to develop a computer program, called P-RISCO, for applying risk simulation to linear programming models, to apply it to a case study, and to test the results against the @RISK program. In the risk analysis it was observed that the mean of the output variable, total net present value (U), was considerably lower than the maximum U value obtained from the linear programming model. It was also verified that the enterprise faces a significant risk of water shortage in the month of April, which does not occur for the cropping pattern obtained by minimizing the irrigation requirement in April over the four years. The scenario analysis indicated that the sale price of the passion fruit crop exerts a strong influence on the financial performance of the enterprise. The comparative analysis verified the equivalence of the P-RISCO and @RISK programs in executing the risk simulation for the considered scenario.
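The risk-simulation idea can be sketched independently of P-RISCO: draw the uncertain sale price many times, rescale the planned revenues, and re-evaluate the plan's total net present value. All cash-flow numbers below are hypothetical.

```python
import numpy as np

def npv(flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    years = np.arange(len(flows))
    return float(np.sum(flows / (1.0 + rate) ** years))

def simulate_npv(n_draws, invest, revenue, price_sd, rate=0.10, seed=42):
    """Monte Carlo risk simulation in the spirit of the paper: draw a
    relative sale price, rescale the revenues of the planned cropping
    pattern, and re-evaluate total NPV. Cash flows are hypothetical."""
    rng = np.random.default_rng(seed)
    price_factor = rng.normal(1.0, price_sd, n_draws)
    return np.array([npv(invest + revenue * p, rate) for p in price_factor])

invest = np.array([-100.0, 0.0, 0.0, 0.0, 0.0])    # initial outlay
revenue = np.array([0.0, 40.0, 45.0, 50.0, 55.0])  # yearly income
sims = simulate_npv(5_000, invest, revenue, price_sd=0.2)
deterministic = npv(invest + revenue, 0.10)        # the "planned" value
```

The spread of `sims` around the deterministic value is what a risk analysis of this kind reports; percentiles of the simulated distribution quantify the downside risk.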
A distributed computing model for telemetry data processing
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
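The hybrid model described can be caricatured in a few lines: a hub fans telemetry out to subscribed clients (client-server), while any client may republish derived parameters for its peers (peer-to-peer). Names and message shapes below are illustrative, not the actual NASA information-sharing protocol.

```python
from collections import defaultdict

class TelemetryHub:
    """Toy information-sharing hub sketching the hybrid model: a
    central server fans parameters out to subscribers, and any
    subscriber may republish derived values for its peers."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # parameter -> callbacks

    def subscribe(self, parameter, callback):
        self.subscribers[parameter].append(callback)

    def publish(self, parameter, value):
        for callback in self.subscribers[parameter]:
            callback(parameter, value)

hub = TelemetryHub()
received = []

# A flight-controller client subscribes to a telemetered parameter...
hub.subscribe("cabin_temp", lambda p, v: received.append((p, v)))

# ...while a peer synthesizes a derived parameter and republishes it.
hub.subscribe("cabin_temp", lambda p, v: hub.publish("cabin_temp_f",
                                                     v * 9 / 5 + 32))
hub.subscribe("cabin_temp_f", lambda p, v: received.append((p, v)))

hub.publish("cabin_temp", 20.0)   # the server-side push
```

A real implementation distributes this over sockets and handles fault tolerance; the subscribe/publish split is the conceptual core.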
Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing
Energy Technology Data Exchange (ETDEWEB)
Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)]
2015-07-29
The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.
Computation system for nuclear reactor core analysis
International Nuclear Information System (INIS)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.
1977-04-01
This report documents a system which contains computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, along with aspects common to the computational modules; the modules themselves are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.
Computer-aided power systems analysis
Kusic, George
2008-01-01
Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti
Energy Technology Data Exchange (ETDEWEB)
Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States); Conway, Steve [IDC Research Inc., Framingham, MA (United States); Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)
2013-09-30
This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good, with results of use to DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
Computational Aspects of Dam Risk Analysis: Findings and Challenges
Directory of Open Access Journals (Sweden)
Ignacio Escuder-Bueno
2016-09-01
Full Text Available In recent years, risk analysis techniques have proved to be a useful tool to inform dam safety management. This paper summarizes the outcomes of three themes related to dam risk analysis discussed in the Benchmark Workshops organized by the International Commission on Large Dams Technical Committee on “Computational Aspects of Analysis and Design of Dams.” In the 2011 Benchmark Workshop, estimation of the probability of failure of a gravity dam for the sliding failure mode was discussed. Next, in 2013, the discussion focused on the computational challenges of the estimation of consequences in dam risk analysis. Finally, in 2015, the probability of sliding and overtopping in an embankment was analyzed. These Benchmark Workshops have allowed a complete review of numerical aspects for dam risk analysis, showing that risk analysis methods are a very useful tool to analyze the risk of dam systems, including downstream consequence assessments and the uncertainty of structural models.
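The 2011 benchmark theme, estimating the probability of failure of a gravity dam for the sliding failure mode, reduces in its simplest Monte Carlo form to sampling uncertain strength parameters and counting how often the factor of safety drops below one. All loads and geometry below are hypothetical.

```python
import numpy as np

def sliding_failure_probability(n=200_000, seed=1):
    """Monte Carlo estimate of the probability of sliding failure of a
    gravity dam: sample uncertain interface strength parameters and
    count realizations where the factor of safety FS = resisting /
    driving falls below 1. Loads and geometry are hypothetical."""
    rng = np.random.default_rng(seed)
    weight = 50_000.0       # kN per metre run, deterministic here
    uplift = 12_000.0       # kN/m
    horizontal = 28_000.0   # kN/m hydrostatic thrust
    contact_area = 60.0     # m^2 per metre run
    # Uncertain strength at the dam-foundation interface.
    tan_phi = np.tan(np.radians(rng.normal(35.0, 3.0, n)))
    cohesion = rng.lognormal(np.log(100.0), 0.4, n)   # kPa
    resisting = cohesion * contact_area + (weight - uplift) * tan_phi
    fs = resisting / horizontal
    return float(np.mean(fs < 1.0)), fs

p_fail, fs = sliding_failure_probability()
```

The benchmark exercises layer far more onto this skeleton (correlated parameters, load cases, numerical stress analysis), but the counting estimator is the common core.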
Grid computing in large pharmaceutical molecular modeling.
Claus, Brian L; Johnson, Stephen R
2008-07-01
Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
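The coarse-grained parallelization the authors describe is the scatter-gather pattern: many independent work units (for instance, one scoring run per molecule) with no communication between them. In the sketch below a thread pool stands in for the grid's nodes and the scoring function is a toy; every name is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def score_ligand(ligand):
    """Toy stand-in for one coarse-grained work unit (e.g. a single
    docking/scoring run per molecule): cheap and deterministic here."""
    return sum(ord(c) for c in ligand) % 100

def grid_screen(library, workers=4):
    """Scatter independent work units, gather results: the
    embarrassingly parallel pattern that made virtual screening a
    natural early fit for grid computing."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(library, pool.map(score_ligand, library)))

library = ["aspirin", "ibuprofen", "caffeine", "taxol"]
results = grid_screen(library)
```

Fine-grained problems such as molecular dynamics break this pattern because their work units must exchange data every timestep, which is why they needed the infrastructure advances the abstract mentions.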
Attacker Modelling in Ubiquitous Computing Systems
DEFF Research Database (Denmark)
Papini, Davide
in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...... attacker remains somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world and it defines a model that captures the abilities of the attacker. Furthermore a quantitative implementation
40 CFR 194.23 - Models and computer codes.
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Cardioplegia heat exchanger design modelling using computational fluid dynamics.
van Driel, M R
2000-11-01
A new cardioplegia heat exchanger has been developed by Sorin Biomedica. A three-dimensional computer-aided design (CAD) model was optimized using computational fluid dynamics (CFD) modelling. CFD optimization techniques have commonly been applied to velocity flow field analysis, but CFD analysis was also used in this study to predict the heat exchange performance of the design before prototype fabrication. The iterative results of the optimization and the actual heat exchange performance of the final configuration are presented in this paper. Based on the behaviour of this model, both the water and blood fluid flow paths of the heat exchanger were optimized. The simulation predicted superior heat exchange performance using an optimal amount of energy exchange surface area, reducing the total contact surface area, the device priming volume and the material costs. Experimental results confirm the empirical results predicted by the CFD analysis.
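Before (or alongside) a full CFD model, heat-exchange performance is often bounded with the effectiveness-NTU method. The sketch below applies the counterflow relation with hypothetical flow rates and UA, not the Sorin device's values.

```python
import numpy as np

def effectiveness_counterflow(UA, C_hot, C_cold):
    """Effectiveness-NTU relation for a counterflow heat exchanger: a
    zeroth-order performance estimate of the kind a CFD model then
    refines with real geometry. UA and capacity rates in W/K."""
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr, NTU = C_min / C_max, UA / C_min
    if abs(Cr - 1.0) < 1e-12:
        return NTU / (1.0 + NTU)
    e = np.exp(-NTU * (1.0 - Cr))
    return (1.0 - e) / (1.0 - Cr * e)

# Hypothetical values: a low-flow blood side cooled by a water side.
C_blood = 0.008 * 3600.0   # kg/s * J/(kg K) -> W/K (the C_min stream)
C_water = 0.050 * 4180.0
eff = effectiveness_counterflow(UA=80.0, C_hot=C_blood, C_cold=C_water)
blood_drop = eff * (37.0 - 10.0)   # deg C removed from the blood stream
```

CFD earns its keep by predicting UA and flow distribution from the actual geometry; a lumped relation like this only checks that the overall numbers are plausible.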
Practical computer analysis of switch mode power supplies
Bennett, Johnny C
2006-01-01
When designing switch-mode power supplies (SMPSs), engineers need much more than simple "recipes" for analysis. Such plug-and-go instructions are not at all helpful for simulating larger and more complex circuits and systems. Offering more than merely a "cookbook," Practical Computer Analysis of Switch Mode Power Supplies provides a thorough understanding of the essential requirements for analyzing SMPS performance characteristics. It demonstrates the power of the circuit averaging technique when used with powerful computer circuit simulation programs. The book begins with SMPS fundamentals and the basics of circuit averaging models, reviewing most basic topologies and explaining all of their various modes of operation and control. The author then discusses the general analysis requirements of power supplies and how to develop the general types of SMPS models, demonstrating the use of SPICE for analysis. He examines the basic first-order analyses generally associated with SMPS performance along with more pra...
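The circuit-averaging technique the book builds on replaces the switching waveforms with their cycle averages. For an ideal buck converter in continuous conduction, integrating the averaged state equations settles at the familiar steady state V = D·Vin; the component values below are hypothetical.

```python
def buck_average_model(vin, duty, L, C, R, dt, steps):
    """State-space averaged model of an ideal buck converter in
    continuous conduction: the switch cell is replaced by its
    duty-cycle average, so dI/dt = (d*Vin - V)/L and
    dV/dt = (I - V/R)/C. Forward-Euler integration of the averaged
    (ripple-free) dynamics."""
    i, v = 0.0, 0.0
    for _ in range(steps):
        di = (duty * vin - v) / L
        dv = (i - v / R) / C
        i += di * dt
        v += dv * dt
    return i, v

# Hypothetical component values; the averaged model should settle
# near the ideal steady state V = D * Vin = 6 V, I = V/R = 0.6 A.
i_final, v_final = buck_average_model(vin=12.0, duty=0.5, L=100e-6,
                                      C=100e-6, R=10.0, dt=1e-6,
                                      steps=200_000)
```

In SPICE-based analysis the same averaged cell is built as a controlled-source subcircuit, which is what lets the simulator sweep AC and transient behaviour without resolving every switching cycle.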
Computer-Aided Modeling of Lipid Processing Technology
DEFF Research Database (Denmark)
Diaz Tovar, Carlos Axel
2011-01-01
increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation...... are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately...... to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...
Computational Aerodynamic Modeling of Small Quadcopter Vehicles
Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.
2017-01-01
High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.
Plasma geometric optics analysis and computation
International Nuclear Information System (INIS)
Smith, T.M.
1983-01-01
Important practical applications in the generation, manipulation, and diagnosis of laboratory thermonuclear plasmas have created a need for elaborate computational capabilities in the study of high frequency wave propagation in plasmas. A reduced description of such waves suitable for digital computation is provided by the theory of plasma geometric optics. The existing theory is beset by a variety of special cases in which the straightforward analytical approach fails, and has been formulated with little attention to problems of numerical implementation of that analysis. The standard field equations are derived for the first time from kinetic theory. A discussion of certain terms previously, and erroneously, omitted from the expansion of the plasma constitutive relation is given. A powerful but little known computational prescription for determining the geometric optics field in the neighborhood of caustic singularities is rigorously developed, and a boundary layer analysis for the asymptotic matching of the plasma geometric optics field across caustic singularities is performed for the first time with considerable generality. A proper treatment of birefringence is detailed, wherein a breakdown of the fundamental perturbation theory is identified and circumvented. A general ray tracing computer code suitable for applications to radiation heating and diagnostic problems is presented and described
Analysis of electronic circuits using digital computers
International Nuclear Information System (INIS)
Tapu, C.
1968-01-01
Various programmes have been proposed for studying electronic circuits with the help of computers. It is shown here how it is possible to use the programme ECAP, developed by I.B.M., for studying the behaviour of an operational amplifier from different points of view: direct-current, alternating-current, and transient-state analysis, optimisation of the open-loop gain, and a study of reliability. (author) [fr]
ANS main control complex three-dimensional computer model development
International Nuclear Information System (INIS)
Cleaves, J.E.; Fletcher, W.M.
1993-01-01
A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.
A Research Roadmap for Computation-Based Human Reliability Analysis
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
Elian, Nicolas; Bloom, Mitchell; Dard, Michel; Cho, Sang-Choon; Trushkowsky, Richard D; Tarnow, Dennis
2014-02-01
The purpose of this study was to assess the effect of inter-implant distance on interproximal bone utilizing platform switching. Analysis of interproximal bone usually depends on traditional two-dimensional radiographic assessment. Although there has been increased reliability of current techniques, there has been an inability to track bone level changes over time and in three dimensions. Micro-CT has provided three-dimensional imaging that can be used in conjunction with traditional two-dimensional radiographic techniques. This study was performed on 24 female minipigs. Twelve animals received three implants with an inter-implant distance of 3 mm on one side of the mandible and another three implants on the contra-lateral side, where the implants were placed 2 mm apart creating a split mouth design. Twelve other animals received three implants with an inter-implant distance of 3 mm on one side of the mandible and another three implants on the contra-lateral side, where the implants were placed 4 mm apart creating a split mouth design too. The quantitative evaluation was performed comparatively on radiographs taken at t = 0 (immediately after implantation) and at t = 8 weeks (after termination). The samples were scanned by micro-computed tomography (μCT) to quantify the first bone to implant contact (fBIC) and bone volume/total volume (BV/TV). Mixed model regressions using the nonparametric Brunner-Langer method were used to determine the effect of inter-implant distance on the measured outcomes. The change in bone level was determined using radiography and its mean was 0.05 mm for an inter-implant distance of 3 mm and 0.00 mm for a 2 mm distance (P = 0.7268). The mean of this outcome was 0.18 mm for the 3 mm and for 4 mm inter-implant distance (P = 0.9500). Micro-computed tomography showed that the fBIC was always located above the reference, 0.27 and 0.20 mm for the comparison of 2-3 mm (P = 0.4622) and 0.49 and 0.34 mm for the inter-implant distance of 3 and 4 mm (P
Computational hemodynamics theory, modelling and applications
Tu, Jiyuan; Wong, Kelvin Kian Loong
2015-01-01
This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system. Whereas traditional research methodologies for the human cardiovascular system are challenging because of their invasive nature, several recent advances in medical imaging and in computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Any new student or researcher in this field may well feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point for each of the individual disciplines involved and attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...
Computer model for harmonic ultrasound imaging.
Li, Y; Zagzebski, J A
2000-01-01
Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled using the KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation, and the echo signal is modeled using linear propagation theory, because the echo signal is much weaker than the incident pulse. Both time-domain and frequency-domain numerical solutions to the KZK equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.
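The harmonic buildup the paper studies can be illustrated with a one-dimensional toy version of the forward propagation. The sketch below is not the authors' KZK solver; it is a minimal plane-wave split-step scheme (an Earnshaw-type nonlinear distortion step plus a frequency-domain attenuation step, i.e., a lossy Burgers caricature), and all parameter values are illustrative water-like numbers.

```python
import numpy as np

def propagate(p0, fs, f0, z, dz, c=1540.0, rho=1000.0, beta=3.5, alpha0=0.0):
    """Plane-wave split-step sketch of nonlinear propagation (lossy Burgers).

    Per step: (1) nonlinear distortion -- each sample is time-shifted in
    proportion to the local pressure (Earnshaw solution), which steepens the
    waveform and feeds energy into harmonics; (2) linear power-law
    attenuation applied in the frequency domain.
    """
    n = len(p0)
    t = np.arange(n) / fs
    f = np.fft.rfftfreq(n, 1.0 / fs)
    p = p0.copy()
    for _ in range(int(round(z / dz))):
        shift = beta * p * dz / (rho * c**3)   # local convective time shift
        p = np.interp(t, t + shift, p)         # resample the distorted wave
        att = np.exp(-alpha0 * (f / f0) ** 2 * dz)
        p = np.fft.irfft(np.fft.rfft(p) * att, n)
    return p

fs, f0 = 400e6, 2e6                       # sampling rate, fundamental (Hz)
t = np.arange(4000) / fs                  # 10 us window = 20 whole cycles
p0 = 1e6 * np.sin(2 * np.pi * f0 * t)     # 1 MPa incident tone
p = propagate(p0, fs, f0, z=0.04, dz=1e-3)
spec = np.abs(np.fft.rfft(p))
print("2nd harmonic / fundamental:", spec[40] / spec[20])  # bins at 2f0, f0
```

For these parameters the propagation distance is roughly half the shock-formation distance, so a substantial second harmonic emerges while the wave remains shock-free, qualitatively matching the buildup process described in the abstract.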
Lattice Boltzmann model capable of mesoscopic vorticity computation
Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping
2017-11-01
It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM), which is itself of only second-order accuracy in the hydrodynamic velocity. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation-time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show that, with enough degrees of freedom and appropriate modifications, mesoscopic vorticity computation can be achieved in the LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with a single relaxation parameter.
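The mesoscopic gradient computation that the paper generalizes to vorticity can be seen in its simplest, well-known form: in a BGK model the nonequilibrium second moment is proportional to the strain rate, S_ab ≈ -Σ_i f_i^neq c_ia c_ib / (2 ρ c_s² τ) in lattice units. The D2Q9 sketch below (assumed parameters, not the paper's D3Q27 MRT model) checks this purely local formula against a finite-difference gradient for a decaying shear wave.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and squared sound speed
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)
cs2 = 1.0 / 3.0

def feq(rho, ux, uy):
    """Second-order equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    u2 = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - u2 / (2 * cs2))

N, tau, U = 32, 0.9, 0.02                 # grid size, relaxation time, shear amplitude
ux = np.broadcast_to(U * np.sin(2 * np.pi * np.arange(N) / N), (N, N)).copy()
uy = np.zeros((N, N)); rho = np.ones((N, N))
f = feq(rho, ux, uy)

for _ in range(50):                       # BGK collide-and-stream
    rho = f.sum(0)
    ux = (f * c[:, 0, None, None]).sum(0) / rho
    uy = (f * c[:, 1, None, None]).sum(0) / rho
    f += -(f - feq(rho, ux, uy)) / tau
    for i in range(9):
        f[i] = np.roll(f[i], (c[i, 0], c[i, 1]), axis=(0, 1))

rho = f.sum(0)
ux = (f * c[:, 0, None, None]).sum(0) / rho
uy = (f * c[:, 1, None, None]).sum(0) / rho
fneq = f - feq(rho, ux, uy)
# mesoscopic strain rate from the nonequilibrium second moment (local!)
Sxy = -(fneq * c[:, 0, None, None] * c[:, 1, None, None]).sum(0) / (2 * rho * cs2 * tau)
# finite-difference reference (needs neighbor information)
Sxy_fd = 0.5 * ((np.roll(ux, -1, 1) - np.roll(ux, 1, 1)) / 2
                + (np.roll(uy, -1, 0) - np.roll(uy, 1, 0)) / 2)
print("max relative error:", np.abs(Sxy - Sxy_fd).max() / np.abs(Sxy_fd).max())
```

The two fields agree to within a few percent, which is the point of the paper's program: with additional moment degrees of freedom, the same local trick can be extended from the strain rate to the full velocity and pressure gradients, hence the vorticity.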
Computer modelling of superconductive fault current limiters
Energy Technology Data Exchange (ETDEWEB)
Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)
1998-05-01
Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs obtain solutions by iterative methods built around real measured data rather than theoretical models, in order to achieve accuracy at high current densities. (orig.) 5 refs.
Computational fluid dynamics modelling in cardiovascular medicine.
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. The approach is potentially highly beneficial, and a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited.
Computer code for general analysis of radon risks (GARR)
International Nuclear Information System (INIS)
Ginevan, M.
1984-09-01
This document presents a computer model for general analysis of radon risks that allows the user to specify a large number of possible models with a small number of simple commands. The model is written in a version of BASIC which conforms closely to the American National Standards Institute (ANSI) definition of minimal BASIC, and thus is readily modified for use on a wide variety of computers, in particular microcomputers. Model capabilities include generation of single-year life tables from 5-year abridged data; calculation of multiple-decrement life tables for lung cancer for the general population, smokers, and nonsmokers; and a cohort lung cancer risk calculation that allows specification of the level and duration of radon exposure, the form of the risk model, and the specific population assumed at risk. 36 references, 8 figures, 7 tables
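The abridged-to-single-year expansion that GARR performs can be sketched in a few lines. The snippet below is not the GARR code (which is written in BASIC); it is a hedged Python illustration of one standard expansion, assuming a constant hazard within each 5-year age interval and using made-up survivorship numbers.

```python
import math

# Hypothetical 5-year abridged survivorship l(x) per 100,000 live births
abridged = {0: 100000, 5: 99000, 10: 98800, 15: 98500, 20: 98000}

def single_year(abridged):
    """Expand abridged l(x) to single years of age.

    Assumes a constant hazard (force of mortality) within each interval,
    so survivorship interpolates geometrically between tabulated ages.
    """
    ages = sorted(abridged)
    lx = {}
    for a0, a1 in zip(ages, ages[1:]):
        # constant hazard mu over [a0, a1): l(x) = l(a0) * exp(-mu * (x - a0))
        mu = -math.log(abridged[a1] / abridged[a0]) / (a1 - a0)
        for x in range(a0, a1):
            lx[x] = abridged[a0] * math.exp(-mu * (x - a0))
    lx[ages[-1]] = abridged[ages[-1]]
    return lx

lx = single_year(abridged)
print(round(lx[2], 1))   # interpolated survivorship at age 2
```

A multiple-decrement extension would carry several cause-specific hazards through the same interpolation, which is essentially what the document's lung cancer calculations build on the single-year table.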
Tolerance analysis through computational imaging simulations
Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon
2017-11-01
The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.
Computational Models of Human Organizational Dynamics
National Research Council Canada - National Science Library
Courand, Gregg
2000-01-01
.... This is the final report for our Phase II SBIR project, conducted over three years. Our research program has contributed theory, methodology, and technology for organizational modeling and analysis...
Analytical performance modeling for computer systems
Tay, Y C
2013-01-01
This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
The deterministic computational modelling of radioactivity
International Nuclear Information System (INIS)
Damasceno, Ralf M.; Barros, Ricardo C.
2009-01-01
This paper describes a software application that models simple radioactive decay, decay to stable nuclei, and directly coupled decay chains of up to thirteen successive radioactive decays. An internal data bank holds the decay constants of the various decays, which considerably simplifies use of the program by people outside the nuclear field. The paper presents numerical results for typical model problems.
Conceptual design of pipe whip restraints using interactive computer analysis
International Nuclear Information System (INIS)
Rigamonti, G.; Dainora, J.
1975-01-01
Protection against pipe break effects necessitates a complex interaction between failure mode analysis, piping layout, and structural design. Many iterations are required to finalize structural designs and equipment arrangements. The magnitude of the pipe break loads transmitted by the pipe whip restraints to structural embedments precludes the application of conservative design margins. A simplified analytical formulation of the nonlinear dynamic problems associated with pipe whip has been developed and applied using interactive computer analysis techniques. In the dynamic analysis, the restraint and the associated portion of the piping system are modeled using the finite element lumped mass approach to properly reflect the dynamic characteristics of the piping/restraint system. The analysis is performed as a series of piecewise linear increments. Each of these linear increments is terminated by either the formation of plastic conditions or the closing/opening of gaps. The stiffness matrix is modified to reflect the changed stiffness characteristics of the system, and the analysis is restarted using the previous boundary conditions. The formation of yield hinges is related to the plastic moment of the section, and unloading paths are automatically considered. The conceptual design of the piping/restraint system is performed using interactive computer analysis. The application of the simplified analytical approach with interactive computer analysis results in an order of magnitude reduction in engineering time and computer cost. (Auth.)
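The piecewise-linear incremental idea can be illustrated on a single-degree-of-freedom caricature of the problem: a pipe mass driven by a constant blowdown thrust, a clearance gap, and an elastic-perfectly-plastic restraint whose stiffness state changes at gap closure and hinge formation. All numbers below are invented for illustration; the actual method uses lumped-mass finite element models of the piping/restraint system.

```python
import numpy as np

def restraint_response(m, k, gap, fy, fext, dt, nsteps):
    """SDOF pipe-whip sketch: mass m driven by constant thrust fext into a
    one-sided restraint with clearance `gap`, elastic stiffness k, and
    yield force fy (elastic-perfectly-plastic, elastic unloading)."""
    u = v = up = 0.0              # displacement, velocity, plastic set
    hist = []
    for _ in range(nsteps):
        pen = u - gap - up        # elastic deformation of the restraint
        if pen > 0.0:
            fr = min(k * pen, fy)
            if k * pen > fy:      # yield hinge forms: accumulate plastic set
                up = u - gap - fy / k
        else:
            fr = 0.0              # gap open: restraint inactive
        a = (fext - fr) / m
        v += a * dt               # semi-implicit Euler increment
        u += v * dt
        hist.append(u)
    return np.array(hist)

# Illustrative numbers: 100 kg pipe mass, 10 mm gap, thrust below yield force
h = restraint_response(m=100.0, k=1e7, gap=0.01, fy=2e5, fext=1e5,
                       dt=1e-4, nsteps=5000)
print("peak displacement [m]:", h.max())
```

Each time step here is a linear increment; the state switches (gap open/closed, elastic/plastic) mirror the stiffness-matrix updates and restart conditions described in the abstract, and the plastic set records the permanent deformation after hinge formation.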
Energy Technology Data Exchange (ETDEWEB)
Ham, Tae K., E-mail: taekyu8@gmail.com [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Arcilesi, David J., E-mail: arcilesi.1@osu.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Kim, In H., E-mail: ihkim0730@gmail.com [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Sun, Xiaodong, E-mail: sun.200@osu.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Christensen, Richard N., E-mail: rchristensen@uidaho.edu [Nuclear Engineering Program, The Ohio State University, Columbus, OH 43210 (United States); Oh, Chang H. [Idaho National Laboratory, Idaho Falls, ID 83402 (United States); Kim, Eung S., E-mail: kes7741@snu.ac.kr [Idaho National Laboratory, Idaho Falls, ID 83402 (United States)
2016-04-15
Highlights: • Uncertainty quantification and benchmark study are performed to validate an ANSYS FLUENT computer model for a depressurization process in a high-temperature gas-cooled reactor. • An ANSYS FLUENT computer model of a 1/8th scaled-down geometry of a VHTR hot exit plenum is presented, which is similar to the experimental test facility that has been constructed at The Ohio State University. • Using the computer model of the scaled-down geometry, the effects of the depressurization process and flow oscillations on the subsequent density-driven stratified flow phenomenology are examined computationally. • The effects of the scaled-down hot exit plenum internal structure temperature on the density-driven stratified flow phenomenology are investigated numerically. - Abstract: An air-ingress accident is considered to be one of the design basis accidents of a very high-temperature gas-cooled reactor (VHTR). The air-ingress accident is initiated, in its worst-case scenario, by a complete break of the hot duct in what is referred to as a double-ended guillotine break. This leads to an initial loss of the primary helium coolant via depressurization. Following the depressurization process, the air–helium mixture in the reactor cavity could enter the reactor core via the hot duct and hot exit plenum. In the event that air ingresses into the reactor vessel, the high-temperature graphite structures in the reactor core and hot plenum will chemically react with the air, which could lead to damage of in-core graphite structures and fuel, release of carbon monoxide and carbon dioxide, core heat up, failure of the structural integrity of the system, and eventually the release of radionuclides to the environment. Studies in the available literature focus on the phenomena of the air ingress accident that occur after the termination of the depressurization, such as density-driven stratified flow, molecular diffusion, and natural circulation. However, a recent study
Computational Design Modelling : Proceedings of the Design Modelling Symposium
Kilian, Axel; Palz, Norbert; Scheurer, Fabian
2012-01-01
This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine how the computational processes within this field can develop into a broader and less exotic building practice, one that bears more subtle but powerful traces of the complex tool sets and approaches developed and studied over recent years. The outcome is a set of new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.
Toward a computational model of hemostasis
Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith
2017-11-01
Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally validated, computational model of extravascular clot formation (hemostasis) in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous in vitro microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.
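The central feedback in such a model (clot growth in the injury channel throttles the escaping flow until occlusion) can be caricatured with a lumped, zero-dimensional sketch in place of the paper's finite-element PDE system. All parameter values below are arbitrary illustrative units, and the deposition law is a deliberate simplification.

```python
# Lumped sketch: deposited platelet mass raises the injury channel's hydraulic
# resistance, throttling the escaping flow until (effective) occlusion.
dp = 100.0        # pressure drop across the injury channel (arbitrary units)
R0 = 1.0          # initial hydraulic resistance of the open channel
alpha = 50.0      # resistance added per unit of deposited platelet mass
k_on = 0.02       # deposition rate per unit platelet flux
C = 1.0           # platelet concentration in the escaping blood

m, dt = 0.0, 0.01
Q_hist = []
for _ in range(20000):
    Q = dp / (R0 + alpha * m)      # flow through the injury channel
    m += k_on * C * Q * dt         # platelets deposit in proportion to flux
    Q_hist.append(Q)
print("initial flow:", Q_hist[0], "final flow:", Q_hist[-1])
```

The escaping flow decays toward zero while the (implicit) main-vessel flow is untouched, reproducing in miniature the occlusion behavior both the computational and microfluidic models exhibit.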
Computational Fluid Dynamics Modeling of Bacillus anthracis ...
Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict
Efficient Use of Preisach Hysteresis Model in Computer Aided Design
Directory of Open Access Journals (Sweden)
IONITA, V.
2013-05-01
The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer).
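For readers unfamiliar with the classical Preisach model, the sketch below implements its textbook form: a weighted triangular grid of two-state relay hysterons. It uses a uniform weight density for illustration only; the identification step the paper describes would replace it with a density fitted to corrected measurement data.

```python
import numpy as np

class Preisach:
    """Classical scalar Preisach model on a discrete triangular hysteron grid.

    Each relay (alpha, beta), alpha >= beta, switches to +1 when H >= alpha
    and to -1 when H <= beta; the output is the weighted sum of relay states.
    """
    def __init__(self, n=50, hsat=1.0):
        a = np.linspace(-hsat, hsat, n)
        self.alpha, self.beta = np.meshgrid(a, a, indexing="ij")
        self.mask = self.alpha >= self.beta          # Preisach triangle
        self.state = -np.ones_like(self.alpha)       # start negatively saturated
        self.mu = self.mask / self.mask.sum()        # uniform weight density (demo)

    def apply(self, H):
        self.state[self.alpha <= H] = 1.0            # relays flip up
        self.state[self.beta >= H] = -1.0            # relays flip down
        return float((self.mu * self.state).sum())

p = Preisach()
H_path = np.concatenate([np.linspace(-1, 1, 101), np.linspace(1, -1, 101)])
loop = [p.apply(h) for h in H_path]                  # traces the major loop
```

Driving the model up and then down the field axis traces a major hysteresis loop: the ascending and descending branches differ at every intermediate field, and the relay states retain the memory of past extrema that makes the Preisach construction useful for CAD-oriented material models.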
International Nuclear Information System (INIS)
Cacciabue, P.C.; Fremont, R. de; Renard, A.
1982-01-01
The report gives the results of comparative calculations performed by the Whole Core Accident Codes Group, a subgroup of the Safety Working Group of the Fast Reactor Coordinating Committee, for a hypothetical transient overpower accident in an irradiated LMFBR core. Different computer codes from members of the European Community and the United States were used. The calculations are based on a benchmark problem, using commonly agreed input data for the most important phenomena, such as the fuel pin failure threshold, FCI parameters, etc. In addition, results obtained with alternative modelling assumptions are presented, in order to show parametrically the influence of more advanced modelling capabilities and/or better (so-called best-estimate) input data for the most important phenomena on the accident sequences
Computer Modeling of Human Delta Opioid Receptor
Directory of Open Access Journals (Sweden)
Tatyana Dzimbova
2013-04-01
The development of selective agonists of the δ-opioid receptor, as well as models of ligand-receptor interaction, is a subject of increasing interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group, and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations for the tested models of DOR were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER and good values from PROCHECK (92.6% in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation therefore suggests a reliable model of DOR, which can be used for further in silico experiments, enabling faster and more accurate design of selective and effective ligands for the δ-opioid receptor.
Energy Technology Data Exchange (ETDEWEB)
Maher, A.R.; Al-Baghdadi, S. [International Technological Univ., London (United Kingdom). Dept. of Mechanical Engineering; Haroun, A.K.; Al-Janabi, S. [Babylon Univ., Babylon (Iraq). Dept. of Mechanical Engineering
2007-07-01
Fuel cell technology is expected to play an important role in meeting the growing demand for distributed generation because it can convert the chemical energy of a clean fuel directly into electrical energy. An operating fuel cell has varying local conditions of temperature, humidity, and power generation across the active area of the fuel cell in 3D. This paper presented a model that was developed to improve the basic understanding of the transport phenomena and thermal stresses in PEM fuel cells, and to investigate the behaviour of polymer membrane under hygro and thermal stresses during the cell operation. This comprehensive 3D, multiphase, non-isothermal model accounts for the major transport phenomena in a PEM fuel cell, notably convective and diffusive heat and mass transfer; electrode kinetics; transport and phase change mechanism of water; and potential fields. The model accounts for the liquid water flux inside the gas diffusion layers by viscous and capillary forces and can therefore predict the amount of liquid water inside the gas diffusion layers. This study also investigated the key parameters affecting fuel cell performance including geometry, materials and operating conditions. The model considers the many interacting, complex electrochemical, transport phenomena, thermal stresses and deformation that cannot be studied experimentally. It was concluded that the model can provide a computer-aided tool for the design and optimization of future fuel cells with much higher power density and lower cost. 21 refs., 2 tabs., 14 figs.
Validation of a phytoremediation computer model
International Nuclear Information System (INIS)
Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.
1999-01-01
The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs
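The "parameters of time and depth" behavior can be caricatured with a first-order degradation model whose rate is enhanced where root density is highest. This is a hedged sketch with invented rate constants and an assumed exponential root profile, not the validated model from the study (which also tracks root growth, microbial activity, and soil moisture).

```python
import numpy as np

# Hypothetical first-order degradation whose rate is enhanced near the surface,
# where an (assumed) exponential root-density profile stimulates microbes.
C0 = 10.0                                  # initial concentration, mg/kg
kb, kr, zr = 0.002, 0.02, 0.3              # base rate, root bonus (1/day), root depth (m)
z = np.linspace(0.0, 1.0, 21)              # depth below surface, m
t = np.arange(0, 366)                      # time, days
k = kb + kr * np.exp(-z / zr)              # depth-dependent degradation rate
C = C0 * np.exp(-np.outer(t, k))           # concentration field C(t, z)
print("day 365, surface vs 1 m depth:", C[-1, 0], C[-1, -1])
```

Even this crude picture reproduces the qualitative signature the lysimeter data test for: vegetated (rooted) soil loses contaminant fastest near the surface, while the deep, root-free soil approaches the slow unvegetated decay rate.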