WorldWideScience

Sample records for analytic computer model

  1. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking, and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions...
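
    As an illustration of the kind of closed-form model such a book covers (an assumed example, not an excerpt), the M/M/1 queue relates arrival rate λ and service rate μ to utilization, mean population and mean response time:

```latex
\rho = \frac{\lambda}{\mu} < 1, \qquad
\bar{N} = \frac{\rho}{1-\rho}, \qquad
\bar{R} = \frac{1/\mu}{1-\rho}
```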

  2. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modelling tree crown development: experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modelling. The assumption common to all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. The different stages of the above-mentioned approaches are described in the paper. The experimental data for spruce, a description of the computer modelling system and a variant of the computer model are presented. (author). 9 refs, 4 figs

  3. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions that are often difficult to interpret, analytical...

  4. Computing the zeros of analytic functions

    CERN Document Server

    Kravanja, Peter

    2000-01-01

    Computing all the zeros of an analytic function and their respective multiplicities, locating clusters of zeros of analytic functions, computing zeros and poles of meromorphic functions, and solving systems of analytic equations are problems in computational complex analysis that lead to a rich blend of mathematics and numerical analysis. This book treats these four problems in a unified way. It contains not only theoretical results (based on formal orthogonal polynomials or rational interpolation) but also numerical analysis and algorithmic aspects, implementation heuristics, and polished software (the package ZEAL) that is available via the CPC Program Library. Graduate students and researchers in numerical mathematics will find this book very readable.

  5. Computer aided design of Langasite resonant cantilevers: analytical models and simulations

    Science.gov (United States)

    Tellier, C. R.; Leblois, T. G.; Durand, S.

    2010-05-01

    Analytical models for the piezoelectric excitation and for the wet micromachining of resonant cantilevers are proposed. Firstly, computations of the metrological performances of micro-resonators allow us to select special cuts and special alignments of the cantilevers. Secondly, the self-developed simulator TENSOSIM, based on the kinematic and tensorial model, furnishes the etching shapes of the cantilevers. As a result, the number of selected cuts is reduced. Finally, the simulator COMSOL® is used to evaluate the influence of the final etching shape on metrological performances, especially on the resonance frequency. Changes in frequency are evaluated, and the deviating behaviour of structures with less favourable built-in ends is tested, showing that the X cut is the best cut for LGS resonant cantilevers vibrating in flexural modes (type 1 and type 2) or in torsion mode.
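
    For orientation (a textbook relation, not taken from the paper), the fundamental flexural resonance of a uniform cantilever in Euler-Bernoulli beam theory, which analytical models of this kind refine for anisotropic elasticity and etching-induced geometry, is

```latex
f_1 = \frac{\lambda_1^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}, \qquad \lambda_1 \approx 1.875,
```

    with L the beam length, E Young's modulus, I the area moment of inertia, ρ the density and A the cross-sectional area.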

  6. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model assumes single scattering processes. It is based on Mie theory and on the geometry of the optical device (emitter and receiver). In particular, we introduce an overlap function and take the phase function of fog into account. The backscattered luminance obtained with our analytical model is compared to simulations made using a Monte Carlo method based on multiple scattering processes. Excellent agreement is found, in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If the geometry of the optical device is not taken into account, the model-estimated backscattered luminance differs from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and with our analytical model is in good agreement with experimental results, since the mean difference between the calculations and the experimental measurements is smaller than the experimental uncertainty
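
    A hedged sketch of the single-scattering picture used in such models (generic lidar-type relations, not the authors' exact formulation): the backscattered power from range z, and the Koschmieder link between extinction and meteorological visibility, are

```latex
P(z) \propto \frac{O(z)\,\beta(z)}{z^{2}}
  \exp\!\Big(-2\int_0^z \sigma(z')\,dz'\Big), \qquad
V \approx \frac{3.912}{\sigma},
```

    where O(z) is the emitter-receiver overlap function, β the backscatter coefficient and σ the extinction coefficient (the factor 3.912 corresponds to a 2% contrast threshold).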

  7. Kinetics of transformations nucleated on random parallel planes: analytical modelling and computer simulation

    International Nuclear Information System (INIS)

    Rios, Paulo R; Assis, Weslley L S; Ribeiro, Tatiana C S; Villa, Elena

    2012-01-01

    In a classical paper, Cahn derived expressions for the kinetics of transformations nucleated on random planes and lines. He used those as a model for nucleation on the boundaries, edges and vertices of a polycrystal consisting of equiaxed grains. In this paper it is demonstrated that Cahn's expression for random planes may be used in situations beyond the scope envisaged in Cahn's original paper. For instance, we derived an expression for the kinetics of transformations nucleated on random parallel planes that is identical to that formerly obtained by Cahn considering random planes. Computer simulation of transformations nucleated on random parallel planes is carried out. It is shown that there is excellent agreement between simulated results and analytical solutions. Such an agreement is to be expected if both the simulation and the analytical solution are correct. (paper)
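
    For context (a standard relation, stated here for readers unfamiliar with the formalism): Cahn's results build on the Kolmogorov-Johnson-Mehl-Avrami construction, which converts the "extended" transformed fraction (overlaps counted multiply) into the real one:

```latex
V_V(t) = 1 - \exp\big(-V_{V,\mathrm{ext}}(t)\big)
```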

  8. Exploratory analysis regarding the domain definitions for computer based analytical models

    Science.gov (United States)

    Raicu, A.; Oanta, E.; Barhalescu, M.

    2017-08-01

    Our previous computer-based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as the result of Boolean operations on so-called ‘simple’ shapes. Through successive generalisations, the class of ‘simple’ shapes came to include areas bounded by curves approximated using spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to move beyond these limitations, we conceived a general definition of the cross sections, which are now treated as calculus domains consisting of several subdomains, described by input data with complex parameterizations. This new vision allows us to naturally assign an arbitrary number of attributes to the subdomains, so that new phenomena that use map-wise information, such as the equilibrium diagrams of metal alloys, may be modelled. The hierarchy of the input data text files, which use the comma-separated-value format, and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data-processing software instruments already developed. The software to be developed subsequently will be modularised and generalised so that it can be used in upcoming projects that require rapid development of computer-based models.

  9. Computer controlled quality of analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.; Huff, G.A.

    1979-01-01

    A PDP 11/35 computer system is used in evaluating analytical chemistry measurement quality control data at the Barnwell Nuclear Fuel Plant. This computerized measurement quality control system has several features which are not available in manual systems, such as real-time measurement control, computer-calculated bias corrections and standard deviation estimates, surveillance applications, evaluation of measurement system variables, records storage, immediate analyst recertification, and the elimination of routine analysis of known bench standards. The effectiveness of the Barnwell computer system has been demonstrated in gathering and assimilating the measurements of over 1100 quality control samples obtained during a recent plant demonstration run. These data were used to determine equations for predicting measurement reliability estimates (bias and precision), to evaluate the measurement system, and to provide direction for modification of chemistry methods. The analytical chemistry measurement quality control activities represented 10% of the total analytical chemistry effort

  10. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)]

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  11. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    Science.gov (United States)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful to deduce the distribution of the buoyancy forces and of the weight forces along the hull, together with the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation, so it is natural to ask how a computer may be used to solve them. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed, in this way reaching a meta-level of understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments needed for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with ‘simple’ geometrical shapes, where by ‘simple’ we mean shapes for which direct calculus relations exist. The set of ‘simple’ shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.

  12. Cognitive computing and big data analytics

    CERN Document Server

    Hurwitz, Judith; Bowles, Adrian

    2015-01-01

    MASTER THE ABILITY TO APPLY BIG DATA ANALYTICS TO MASSIVE AMOUNTS OF STRUCTURED AND UNSTRUCTURED DATA Cognitive computing is a technique that allows humans and computers to collaborate in order to gain insights and knowledge from data by uncovering patterns and anomalies. This comprehensive guide explains the underlying technologies, such as artificial intelligence, machine learning, natural language processing, and big data analytics. It then demonstrates how you can use these technologies to transform your organization. You will explore how different vendors and different industries are a

  13. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

    An input/output subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established, taking into account their special properties, in order to avoid planning errors and to allow predictions of system capacity. Here an analytical model is presented for the magnetic discs of an I/O subsystem, using analytical methods for the individual waiting queues or waiting-queue networks. Only I/O subsystems of IBM computer configurations that can be controlled by the MVS operating system are considered. After a description of the hardware and software components of these I/O systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and, in part, avoids their disadvantages. (orig./RW) [de]
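
    A minimal sketch of the queueing approach described above, assuming a single disc modeled as an M/M/1 queue (the paper's models are more detailed):

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue, in the same time units as the rates."""
    rho = arrival_rate / service_rate          # utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization >= 1")
    return (1.0 / service_rate) / (1.0 - rho)

# Example: 40 I/O requests/s against a disc completing 60 requests/s.
print(f"R = {mm1_response_time(40.0, 60.0) * 1e3:.1f} ms")
```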

  14. Analytical calculations by computer in physics and mathematics

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Tarasov, O.V.; Shirokov, D.V.

    1978-01-01

    A review of the present status of analytical calculations by computer is given. Several programming systems for analytical computation are considered: SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL and AVTO-ANALITIK, which are implemented or will be implemented at JINR, as well as MACSYMA, one of the most highly developed systems. On the basis of the mathematical operations realized in these systems, it is shown that they are appropriate for various problems of theoretical physics and mathematics, for example problems of quantum field theory, celestial mechanics, general relativity and so on. Some problems solved at JINR with programming systems for analytical computation are described. The review is intended for specialists in different fields of theoretical physics and mathematics
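
    A present-day analogue of the systems reviewed above (SymPy standing in for REDUCE or MACSYMA; illustrative only):

```python
# Symbolic differentiation, series expansion and integration with SymPy.
import sympy as sp

x = sp.symbols('x')
expr = sp.exp(-x**2) * sp.sin(x)
print(sp.diff(expr, x))                                 # analytical derivative
print(sp.series(expr, x, 0, 6))                         # Taylor series to O(x**6)
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # Gaussian integral: sqrt(pi)
```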

  15. Three-dimensional time-dependent computer modeling of the electrothermal atomizers for analytical spectrometry

    Science.gov (United States)

    Tsivilskiy, I. V.; Nagulin, K. Yu.; Gilmutdinov, A. Kh.

    2016-02-01

    A full three-dimensional, nonstationary numerical model of graphite electrothermal atomizers of various types is developed. The model is based on the solution of a heat equation within the solid walls of the atomizer, including radiative heat transfer, and on the numerical solution of the full set of Navier-Stokes equations with an energy equation for the gas. Governing equations for the behavior of the discrete phase, i.e., atomic particles suspended in the gas (including the gas-phase processes of evaporation and condensation), are derived from the formal equations of molecular kinetics by numerical solution of the Hertz-Langmuir equation. The model is tested on the following atomizers: a Varian standard heated electrothermal vaporizer (ETV), a Perkin Elmer standard transversely heated graphite tube with integrated platform (THGA), and an original double-stage tube-helix atomizer (DSTHA). The experimental verification of the computer calculations is carried out by shadow spectral visualization of the spatial distributions of atomic and molecular vapors in the analytical space of an atomizer.
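
    The Hertz-Langmuir relation mentioned above gives the net evaporation flux from a surface (standard form; the symbols are the conventional ones, not necessarily the paper's notation):

```latex
J = \frac{\alpha\,\big(p_s(T) - p\big)}{\sqrt{2\pi m k_B T}},
```

    where α is the evaporation coefficient, p_s(T) the saturation vapour pressure at surface temperature T, p the ambient partial pressure and m the particle mass.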

  16. MASCOTTE: analytical model of eddy current signals

    International Nuclear Information System (INIS)

    Delsarte, G.; Levy, R.

    1992-01-01

    Tube examination is a major application of the eddy current technique in the nuclear and petrochemical industries. Because such examination configurations are particularly well suited to analytical treatment, a physical model has been developed that runs on portable computers. It includes simple approximations made possible by the actual conditions of the examinations. The eddy current signal is described by an analytical formulation that takes into account the tube dimensions, the sensor design, the physical characteristics of the defect and the examination parameters. Moreover, the model makes it possible to associate real signals with simulated signals

  17. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge to climatologists, who must manage and analyze it efficiently. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high-performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index built over the chunks avoids unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to provide a web portal that facilitates interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.

  18. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
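
    The quantity in question is presumably the fission-rate-weighted mean energy (a standard definition, stated here for context):

```latex
\bar{E} = \frac{\int_0^\infty E\,\sigma_f(E)\,\phi(E)\,dE}
               {\int_0^\infty \sigma_f(E)\,\phi(E)\,dE},
```

    with φ(E) the neutron flux spectrum and σ_f(E) the fission cross section.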

  19. Modelling of ballistic low energy ion solid interaction - conventional analytic theories versus computer simulations

    International Nuclear Information System (INIS)

    Littmark, U.

    1994-01-01

    The ''philosophy'' behind, and the ''psychology'' of the development from analytic theory to computer simulations in the field of atomic collisions in solids is discussed and a few examples of achievements and perspectives are given. (orig.)

  20. Strategic engineering for cloud computing and big data analytics

    CERN Document Server

    Ramachandran, Muthu; Sarwar, Dilshad

    2017-01-01

    This book demonstrates the use of a wide range of strategic engineering concepts, theories and applied case studies to improve the safety, security and sustainability of complex and large-scale engineering and computer systems. It first details the concepts of system design, life cycle, impact assessment and security to show how these ideas can be brought to bear on the modeling, analysis and design of information systems, with a focused view on cloud-computing systems and big data analytics. This informative book is a valuable resource for graduate students, researchers and industry-based practitioners working in engineering, information and business systems as well as strategy.

  1. Analytical model of impedance in elliptical beam pipes

    CERN Document Server

    Pesah, Arthur Chalom

    2017-01-01

    Beam instabilities are among the main limitations in building higher-intensity accelerators. Having a good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by the beam-environment interaction, and to understand which piece to change when intensity is increased. Most accelerator components have their impedance simulated with finite-element methods (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, with lower computation time and higher precision than their simulated counterparts. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model that works for the whole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both longitudinal and transverse impedance, first in the case of...

  2. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    Analytical local model potentials for modeling the interaction in an atom reduce the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for the atoms Li through Lr; the values of the four parameters are shell-independent and are obtained by fitting the results of the Xα method. The energy eigenvalues, the radial wave functions and the total electron energies are then obtained by solving the radial Schrödinger equation with the new potential function using Numerov's numerical method. The results show that the new form of potential function is suitable for high-, medium- and low-Z atoms. A comparison of the new potential function with other analytical potential functions demonstrates the greater flexibility and accuracy of the present one. (atomic and molecular physics)
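
    A compact sketch of the Numerov scheme named above, checked against the hydrogen 1s state (the paper's four-parameter potential is not reproduced here; the grid and units are assumptions):

```python
# Numerov integration of u'' = f(r) u for the radial Schrodinger equation
# (atomic units). For hydrogen 1s, u(r) should approach 2*r*exp(-r).
import numpy as np

def numerov(f, r, u0, u1):
    """Integrate u'' = f(r) u outward on an equally spaced grid r."""
    h2 = (r[1] - r[0])**2
    u = np.empty_like(r)
    u[0], u[1] = u0, u1
    for n in range(1, len(r) - 1):
        c_prev = 1.0 - h2 * f[n - 1] / 12.0
        c_here = 1.0 + 5.0 * h2 * f[n] / 12.0
        c_next = 1.0 - h2 * f[n + 1] / 12.0
        u[n + 1] = (2.0 * c_here * u[n] - c_prev * u[n - 1]) / c_next
    return u

r = np.linspace(1e-6, 15.0, 3000)
E, l = -0.5, 0                                # exact hydrogen 1s energy (hartree)
f = l * (l + 1) / r**2 - 2.0 / r - 2.0 * E    # u'' = f(r) u with V(r) = -1/r
u = numerov(f, r, r[0], r[1])                 # u ~ r**(l+1) near the origin
u /= np.sqrt(np.sum(u**2) * (r[1] - r[0]))    # crude normalization
```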

  3. An analytically resolved model of a potato's thermal processing using Heun functions

    Science.gov (United States)

    Vargas Toro, Agustín.

    2014-05-01

    A model of a potato's thermal processing is solved analytically. The model is formulated using the heat diffusion equation for a spherical potato processed in a furnace, assuming that the potato's thermal conductivity is radially modulated. The model is solved using the Laplace transform method, applying the Bromwich integral and the residue theorem. The temperature profile in the potato is presented as an infinite series of Heun functions. All computations are performed with computer algebra software, specifically Maple. Using numerical values for the thermal parameters of the potato and the geometric and thermal parameters of the processing furnace, the time evolution of the temperature in different regions inside the potato is presented analytically and graphically. The duration of thermal processing required to achieve a specified effect on the potato is computed. The analytical results obtained are expected to be important in food engineering and cooking engineering.

  4. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    A semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. Semi-analytical wave functions and the corresponding energy eigenvalues, containing only one numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations of the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  5. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
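
    The "computer calculus" idea is today usually called automatic differentiation; a minimal forward-mode sketch with dual numbers (illustrative only, unrelated to the actual GRESS/ADGEN implementations):

```python
# Forward-mode automatic differentiation with dual numbers.
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # function value
    der: float  # derivative w.r.t. the chosen input

    def __add__(self, o):
        return Dual(self.val + o.val, self.der + o.der)

    def __mul__(self, o):
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)

def exp(x: Dual) -> Dual:
    e = math.exp(x.val)
    return Dual(e, e * x.der)

# Exact derivative of f(x) = x * exp(x) at x = 1: f'(1) = 2e = 5.4365...
x = Dual(1.0, 1.0)
y = x * exp(x)
print(y.val, y.der)
```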

  6. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability

  7. Analytical models of optical response in one-dimensional semiconductors

    International Nuclear Information System (INIS)

    Pedersen, Thomas Garm

    2015-01-01

    The quantum mechanical description of the optical properties of crystalline materials typically requires extensive numerical computation. Including excitonic and non-perturbative field effects adds to the complexity. In one dimension, however, the analysis simplifies and optical spectra can be computed exactly. In this paper, we apply the Wannier exciton formalism to derive analytical expressions for the optical response in four cases of increasing complexity. Thus, we start from free carriers and, in turn, switch on electrostatic fields and electron–hole attraction and, finally, analyze the combined influence of these effects. In addition, the optical response of impurity-localized excitons is discussed. - Highlights: • Optical response of one-dimensional semiconductors including excitons. • Analytical model of excitonic Franz–Keldysh effect. • Computation of optical response of impurity-localized excitons

  8. Development of computer-based analytical tool for assessing physical protection system

    Science.gov (United States)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several tools, such as EASI and SAPE, are readily available, but for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodology to model adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool is capable of analyzing the most critical path and quantifying the probability of effectiveness of the system as a performance measure.
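
    A hedged sketch of the path-analysis idea (EASI-style bookkeeping, not the authors' code): detection at a path element only leads to interruption if the delay remaining downstream of that element exceeds the response-force time.

```python
def interruption_probability(elements, response_time):
    """elements: list of (p_detect, delay_seconds) along the adversary path."""
    p_not_yet_detected = 1.0
    p_interrupt = 0.0
    for i, (p_detect, _) in enumerate(elements):
        # Delay remaining at the detection point (taken to include the element itself).
        remaining_delay = sum(d for _, d in elements[i:])
        if remaining_delay > response_time:
            p_interrupt += p_not_yet_detected * p_detect
        p_not_yet_detected *= 1.0 - p_detect
    return p_interrupt

# Illustrative path: fence, building door, vault (probabilities/delays assumed).
path = [(0.9, 10.0), (0.5, 60.0), (0.7, 120.0)]
print(f"P(interruption) = {interruption_probability(path, response_time=90.0):.3f}")
```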

  9. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood, probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  10. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  11. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Full Text Available Based on the observations that a computer can be infected by infected and exposed computers, and that some computers in susceptible or exposed status can gain immunity through antivirus capability, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, the model has a virus-free equilibrium P0, corresponding to the disappearance of the infected computers and the extinction of the virus; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, the model has a unique viral equilibrium P*, at which the infection persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
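
    An illustrative SEIR-type rendering of such a model (rates, initial state and even the exact compartment structure are assumptions for demonstration, not the paper's equations):

```python
# Susceptible-Exposed-Infected-Recovered computer-virus dynamics.
import numpy as np
from scipy.integrate import odeint

beta, sigma, gamma = 0.5, 0.25, 0.2      # infection, activation, cure rates

def rhs(y, t):
    S, E, I, R = y
    return [-beta * S * I,
            beta * S * I - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

R0 = beta / gamma                        # threshold: virus dies out if R0 < 1
t = np.linspace(0.0, 100.0, 1001)
S, E, I, R = odeint(rhs, [0.97, 0.02, 0.01, 0.0], t).T
print(f"R0 = {R0:.2f}, peak infected fraction = {I.max():.3f}")
```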

  12. TH-C-BRD-01: Analytical Computation of Prompt Gamma Ray Emission and Detection for Proton Range Verification

    International Nuclear Information System (INIS)

    Sterpin, E; Vynckier, S; Janssens, G; Smeets, J; Prieels, D

    2014-01-01

    Purpose: A prompt gamma (PG) slit camera prototype demonstrated that on-line range monitoring within 1–2 mm could be performed by comparing expected and measured PG detection profiles. Monte Carlo (MC) simulation can produce the expected PG profile, but at prohibitive computation time for a complete pencil-beam treatment plan. We implemented a much faster method based on analytical processing of pre-computed MC data. Methods: The formation of the PG detection signal can be separated into 1) production of PGs and 2) detection by the camera after PG transport through the geometry. For proton energies from 40 to 230 MeV, PG production in depth was pre-computed by MC (PENH) for 12C, 14N, 16O, 31P and 40Ca. The PG production was then modeled analytically by adding the PG production of each element according to the local proton energy and tissue composition. PG transport in the patient/camera geometries and the detector response were modeled by convolving the PG production profile with a transfer function. The latter is interpolated from a database of transfer functions fitted to pre-computed MC data (PENELOPE). The database was generated for a photon source in a cylindrical phantom with various radii and a camera placed at various positions. As a benchmark, the analytical model was compared to PENH for a water phantom, a phantom with different slabs (adipose, muscle, lung) and a thoracic CT. Results: Good agreement (within 5%) was observed between the analytical model and PENH for the PG production. Similar accuracy in detecting range shifts was also observed. A speed of around 250 ms per profile was achieved (single CPU) using a non-optimized MATLAB implementation. Conclusion: We devised a fast analytical model for generating PG detection profiles. In the test cases considered in this study, accuracy similar to that of MC was achieved for detecting range shifts. This research is supported by IBA
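
    The convolution step at the heart of the method can be sketched in a few lines (toy production profile and an assumed Gaussian transfer function; the real model interpolates MC-fitted transfer functions):

```python
# Detected PG profile = production profile convolved with a transfer function.
import numpy as np

z = np.arange(0.0, 200.0, 1.0)                          # depth (mm)
production = np.where(z < 150.0, 1.0 + z / 150.0, 0.0)  # toy profile, range 150 mm

sigma = 8.0                                   # assumed camera blurring (mm)
kz = np.arange(-50.0, 51.0, 1.0)
kernel = np.exp(-0.5 * (kz / sigma)**2)
kernel /= kernel.sum()

detected = np.convolve(production, kernel, mode='same')
falloff = z[np.argmin(np.gradient(detected))]  # steepest descent ~ proton range
print(f"estimated falloff position: {falloff:.0f} mm")
```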

  13. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
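
    The same fitting workflow, transplanted from Mathematica's FindFit to Python for illustration (a simplified 3-parameter lobe, not the authors' 14-parameter form):

```python
# Least-squares fit of a toy BRDF: diffuse term + Gaussian specular lobe.
import numpy as np
from scipy.optimize import curve_fit

def brdf(theta_s, kd, ks, m):
    return kd + ks * np.exp(-(theta_s / m)**2)

rng = np.random.default_rng(1)
theta = np.linspace(-1.2, 1.2, 50)          # scatter angle (rad)
data = brdf(theta, 0.1, 2.0, 0.15) * (1 + 0.05 * rng.standard_normal(theta.size))

popt, _ = curve_fit(brdf, theta, data, p0=[0.2, 1.0, 0.2])
print("fitted kd, ks, m:", popt)
```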

  14. Construction of analytically solvable models for interacting species. [biological species competition

    Science.gov (United States)

    Rosen, G.

    1976-01-01

    The basic form of a model representation for systems of n interacting biological species is a set of essentially nonlinear autonomous ordinary differential equations. A generic canonical expression for the rate functions in the equations is reported which permits the analytical general solution to be obtained by elementary computation. It is shown that a general analytical solution is directly obtainable for models where the rate functions are prescribed by the generic canonical expression from the outset. Some illustrative examples are given which demonstrate that the generic canonical expression can be used to construct analytically solvable models for two interacting species with limit-cycle dynamics as well as for a three-species interdependence.
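
    For contrast with the analytically solvable construction, the classic two-species Lotka-Volterra system (the standard example of such nonlinear autonomous equations; coefficients are illustrative) is usually integrated numerically:

```python
# Predator-prey (Lotka-Volterra) dynamics integrated with SciPy.
import numpy as np
from scipy.integrate import odeint

a, b, c, d = 1.0, 0.5, 0.75, 0.25

def rates(n, t):
    prey, pred = n
    return [a * prey - b * prey * pred, -c * pred + d * prey * pred]

t = np.linspace(0.0, 30.0, 3001)
prey, pred = odeint(rates, [2.0, 1.0], t).T
print(f"prey oscillates between {prey.min():.2f} and {prey.max():.2f}")
```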

  15. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  16. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of these structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite-element code (SANSSI), and a thin-layered-element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes

  17. A simple analytical model for reactive particle ignition in explosives

    Energy Technology Data Exchange (ETDEWEB)

    Tanguay, Vincent [Defence Research and Development Canada - Valcartier, 2459 Pie XI Blvd. North, Quebec, QC, G3J 1X5 (Canada); Higgins, Andrew J. [Department of Mechanical Engineering, McGill University, 817 Sherbrooke St. West, Montreal, QC, H3A 2K6 (Canada); Zhang, Fan [Defence Research and Development Canada - Suffield, P. O. Box 4000, Stn Main, Medicine Hat, AB, T1A 8K6 (Canada)

    2007-10-15

    A simple analytical model is developed to predict ignition of magnesium particles in nitromethane detonation products. The flow field is simplified by considering the detonation products as a perfect gas expanding in a vacuum in a planar geometry. This simplification allows the flow field to be solved analytically. A single particle is then introduced in this flow field. Its trajectory and heating history are computed. It is found that most of the particle heating occurs in the Taylor wave and in the quiescent flow region behind it, shortly after which the particle cools. By considering only these regions, thereby considerably simplifying the problem, the flow field can be solved analytically with a more realistic equation of state (such as JWL) and a spherical geometry. The model is used to compute the minimum charge diameter for particle ignition to occur. It is found that the critical charge diameter for particle ignition increases with particle size. These results are compared to experimental data and show good agreement. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  18. Analytical computation of prompt gamma ray emission and detection for proton range verification

    International Nuclear Information System (INIS)

    Sterpin, E; Vynckier, S; Janssens, G; Smeets, J; Stappen, François Vander; Prieels, D; Priegnitz, Marlen; Perali, Irene

    2015-01-01

    A prompt gamma (PG) slit camera prototype recently demonstrated that the Bragg peak position in a clinical proton scanned beam could be measured with 1–2 mm accuracy by comparing an expected PG detection profile to a measured one. The computation of the expected PG detection profile in the context of a clinical framework is challenging but must be solved before clinical implementation. Obviously, Monte Carlo methods (MC) can simulate the expected PG profile, but at prohibitively long calculation times. We implemented a much faster method, based on analytical processing of precomputed MC data, that allows practical evaluation of this range monitoring approach in clinical conditions. Reference PG emission profiles were generated with MC simulations (PENH) in targets consisting of either 12C, 14N, 16O, 31P or 40Ca, with 10% of 1H. In a given geometry, the local PG emission can then be derived by adding the contribution of each element, according to the local energy of the proton (obtained in the continuous-slowing-down approximation) and the local composition. The actual incident spot size is taken into account using an optical model fitted to measurements and by super-sampling the spot with several rays (up to 113). PG transport in the patient/camera geometries and the detector response are modelled by convolving the PG production profile with a transfer function. The latter is interpolated from a database of transfer functions fitted to MC data (PENELOPE) generated for a photon source in a cylindrical phantom with various radii and a camera placed at various positions. As a benchmark, the analytical model was compared to MC and experiments in homogeneous and heterogeneous phantoms. Comparisons with MC were also performed in a thoracic CT. For all cases, the analytical model reproduced the position of the Bragg peak computed with MC within 1 mm for the camera in nominal configuration. When compared to measurements, the shape of the...

  19. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed... With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit...
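
    A hedged illustration of the wake-superposition idea (a generic Jensen-type top-hat deficit with linear superposition, standing in for the report's more elaborate thin-shear-layer wakes):

```python
# Ambient wind minus linearly superposed wake deficits along a turbine row.
import numpy as np

def jensen_deficit(x_down, ct=0.8, diameter=80.0, k=0.05):
    """Fractional velocity deficit x_down metres behind a rotor."""
    if x_down <= 0.0:
        return 0.0
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x_down / diameter)**2

u_ambient = 10.0                          # m/s
turbine_x = [0.0, 560.0, 1120.0]          # aligned row, 7 diameters apart
for x in turbine_x:                       # march downstream, as in the text
    deficit = sum(jensen_deficit(x - xu) for xu in turbine_x if xu < x)
    print(f"x = {x:6.0f} m: u = {u_ambient * (1.0 - deficit):.2f} m/s")
```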

  20. Analytical model for computing transient pressures and forces in the safety/relief valve discharge line. Mark I Containment Program, task number 7.1.2

    International Nuclear Information System (INIS)

    Wheeler, A.J.

    1978-02-01

    An analytical model is described that computes the transient pressures, velocities and forces in the safety/relief valve discharge line immediately after safety/relief valve opening. Equations of motion are defined for the gas-flow and water-flow models. Results are not only verified by comparing them with an earlier version of the model, but also with Quad Cities and Monticello plant data. The model shows reasonable agreement with the earlier model and the plant data

  1. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
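
    A toy rendering of the hybrid idea on a 1-DOF computed-torque problem (the physics, features and learner below are assumptions chosen for brevity, not the paper's setup):

```python
# Hybrid model: analytical gravity term + ridge-regression error model
# fitted on the residuals (unmodeled friction and stiffness).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(200, 1))             # joint angle samples
tau_true = 5.0 * np.sin(q) + 0.8 * np.sign(q) + 0.3 * q   # "real" plant torque
tau_analytical = 5.0 * np.sin(q)                          # known physics only

features = np.hstack([q, np.sign(q), q**3])               # residual features
error_model = Ridge(alpha=1e-3).fit(features, tau_true - tau_analytical)
tau_hybrid = tau_analytical + error_model.predict(features)

rms = lambda e: float(np.sqrt(np.mean(e**2)))
print("RMS error, analytical only:", rms(tau_true - tau_analytical))
print("RMS error, hybrid:         ", rms(tau_true - tau_hybrid))
```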

  2. An analytical model on thermal performance evaluation of counter flow wet cooling tower

    Directory of Open Access Journals (Sweden)

    Wang Qian

    2017-01-01

    Full Text Available This paper proposes an analytical model for the simultaneous heat and mass transfer processes in a counter-flow wet cooling tower, under the assumption that the enthalpy of saturated air is a linear function of the water surface temperature. The performance of the proposed analytical model is validated in some typical cases. The validation reveals that, when the cooling range lies in a certain interval, the proposed model is not only comparable in accuracy with the exact model but also reduces computational complexity. In addition, the thermal performance of counter-flow wet cooling towers in power plants is calculated with the proposed analytical model. The results show that the proposed analytical model can be applied to evaluate and predict the thermal performance of counter-flow wet cooling towers.
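
    The linearization that makes the closed form possible (notation assumed) is simply

```latex
h_s(T_w) \approx a + b\,T_w,
```

    i.e., the saturated-air enthalpy is taken as affine in the water-surface temperature, which lets the coupled Merkel-type heat and mass balances be solved in closed form, much as for a dry counterflow heat exchanger in an effectiveness-NTU analysis.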

  3. Comparison of a semi-analytic and a CFD model of uranium combustion to experimental data

    International Nuclear Information System (INIS)

    Clarksean, R.

    1998-01-01

    Two numerical models were developed and compared for the analysis of uranium combustion and ignition in a furnace: a semi-analytical solution and a computational fluid dynamics (CFD) numerical solution. Prediction of uranium oxidation rates is important for fuel storage applications, fuel processing, and the development of spent-fuel metal waste forms. The semi-analytical model was based on heat transfer correlations, a semi-analytical model of flow over a flat surface, and simple radiative heat transfer from the material surface. The CFD model numerically determined the flow field over the object of interest, calculated the heat and mass transfer to the material of interest, and calculated the radiative heat exchange of the material with the furnace. The semi-analytical model is much less detailed than the CFD model, but it yields reasonable results and assists in understanding the physical process; its short computation times allowed the analyst to study numerous scenarios. The CFD model had significantly longer run times and some physical limitations that were not easily modified, but once code limitations were overcome it was better able to yield details of the heat and mass transfer and the flow field

  4. Analytical SN solutions in heterogeneous slabs using symbolic algebra computer programs

    International Nuclear Information System (INIS)

    Warsa, J.S.

    2002-01-01

    A modern symbolic algebra computer program, MAPLE, is used to compute the well-known analytical discrete ordinates, or SN, solutions in one-dimensional slab geometry. Symbolic algebra programs compute the solutions with arbitrary precision and are free of spatial discretization error, so they can be used to investigate new discretizations for one-dimensional slab-geometry SN methods. Pointwise scalar flux solutions are computed for several sample calculations of interest. Sample MAPLE command scripts are provided to illustrate how easily the theory can be translated into a working solution and serve as a complete tool capable of computing analytical SN solutions for mono-energetic, one-dimensional transport problems

  5. Analytical Computation of Information Rate for MIMO Channels

    Directory of Open Access Journals (Sweden)

    Jinbao Zhang

    2017-01-01

    Full Text Available The information rate of discrete signaling constellations is a significant quantity. However, computational complexity makes the information rate rather difficult to analyze for arbitrary fading multiple-input multiple-output (MIMO) channels. An analytical method is proposed to compute the information rate, characterized by considerable accuracy, reasonable complexity, and concise representation. These features will improve the accuracy of performance analyses that use information rate as a criterion.

  6. Fog Computing: An Overview of Big IoT Data Analytics

    Directory of Open Access Journals (Sweden)

    Muhammad Rizwan Anawar

    2018-01-01

    Full Text Available A huge amount of data, generated by the Internet of Things (IoT), is growing exponentially based on nonstop operational states. IoT devices generate an avalanche of information that is disruptive for the predictable data processing and analytics functionality that the cloud handled comfortably before the explosive growth of IoT. The fog computing structure confronts those disruptions, as a powerful complement to cloud functionality, by deploying micro clouds (fog nodes) at the proximity edge of data sources. Big IoT data analytics on fog computing structures in particular is at an emerging phase and requires extensive research to produce more proficient knowledge and smart decisions. This survey summarizes the challenges and opportunities of fog networking in the context of big IoT data analytics. In addition, it emphasizes the key characteristics that, in the proposed research works, make fog computing a suitable platform for new proliferating IoT devices, services, and applications. The most significant fog applications (e.g., health care monitoring, smart cities, connected vehicles, and smart grid) are discussed here, with the goal of creating a well-organized green computing paradigm to support the next generation of IoT applications.

  7. Using analytic continuation for the hadronic vacuum polarization computation

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; Jansen, Karl; Petschlies, Marcus; Renner, Dru B.

    2014-11-01

    We present two examples of applications of the analytic continuation method for computing the hadronic vacuum polarization function in space- and time-like momentum regions. These examples are the Adler function and the leading order hadronic contribution to the muon anomalous magnetic moment. We comment on the feasibility of the analytic continuation method and provide an outlook for possible further applications.

  8. An Analytical Tire Model with Flexible Carcass for Combined Slips

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2014-01-01

    Full Text Available The tire mechanical characteristics under combined cornering and braking/driving situations have significant effects on vehicle directional controls. The objective of this paper is to present an analytical tire model with flexible carcass for combined slip situations, which can describe tire behavior well and can also be used for studying vehicle dynamics. The tire forces and moments come mainly from the shear stress and sliding friction at the tread-road interface. In order to describe complicated tire characteristics and tire-road friction, some key factors are considered in this model: arbitrary pressure distribution; translational, bending, and twisting compliance of the carcass; dynamic friction coefficient; anisotropic stiffness properties. The analytical tire model can describe tire forces and moments accurately under combined slip conditions. Some important properties induced by flexible carcass can also be reflected. The structural parameters of a tire can be identified from tire measurements and the computational results using the analytical model show good agreement with test data.

  9. Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models.

    Science.gov (United States)

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-03-01

    Additive manufacturing (AM) has enabled fabrication of open-cell porous biomaterials based on repeating unit cells. The micro-architecture of the porous biomaterials and, thus, their physical properties could then be precisely controlled. Due to their many favorable properties, porous biomaterials manufactured using AM are considered as promising candidates for bone substitution as well as for several other applications in orthopedic surgery. The mechanical properties of such porous structures including static and fatigue properties are shown to be strongly dependent on the type of the repeating unit cell based on which the porous biomaterial is built. In this paper, we study the mechanical properties of porous biomaterials made from a relatively new unit cell, namely truncated cube. We present analytical solutions that relate the dimensions of the repeating unit cell to the elastic modulus, Poisson's ratio, yield stress, and buckling load of those porous structures. We also performed finite element modeling to predict the mechanical properties of the porous structures. The analytical solution and computational results were found to be in agreement with each other. The mechanical properties estimated using both the analytical and computational techniques were somewhat higher than the experimental data reported in one of our recent studies on selective laser melted Ti-6Al-4V porous biomaterials. In addition to porosity, the elastic modulus and Poisson's ratio of the porous structures were found to be strongly dependent on the ratio of the length of the inclined struts to that of the uninclined (i.e. vertical or horizontal) struts, α, in the truncated cube unit cell. The geometry of the truncated cube unit cell approaches the octahedral and cube unit cells when α respectively approaches zero and infinity. Consistent with those geometrical observations, the analytical solutions presented in this study approached those of the octahedral and cube unit cells when

  10. Improving Wind Turbine Drivetrain Reliability Using a Combined Experimental, Computational, and Analytical Approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.

    2015-03-01

    Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.

  11. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Directory of Open Access Journals (Sweden)

    René Felix Reinhart

    2017-02-01

    Full Text Available Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
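
    A minimal sketch of the hybrid idea, assuming a hypothetical one-link pendulum plant with unmodeled friction: the analytical rigid-body torque is corrected by an error model fitted to residuals with plain least squares. The paper's learning machinery is more sophisticated; all names and values here are illustrative.

        import numpy as np

        # Hypothetical 1-DoF plant: true torque = analytical model + unmodeled friction
        def tau_analytical(q, dq, ddq, m=1.2, l=0.5, g=9.81):
            return m * l**2 * ddq + m * g * l * np.sin(q)    # rigid-body model

        def tau_true(q, dq, ddq):
            return tau_analytical(q, dq, ddq) + 0.8 * np.tanh(5.0 * dq)  # + friction

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(500, 3))                # samples of (q, dq, ddq)
        residual = np.array([tau_true(*x) - tau_analytical(*x) for x in X])

        # Learn the error model on simple features; least squares keeps it transparent
        Phi = np.column_stack([X, np.tanh(5.0 * X[:, 1]), np.ones(len(X))])
        w, *_ = np.linalg.lstsq(Phi, residual, rcond=None)

        def tau_hybrid(q, dq, ddq):
            phi = np.array([q, dq, ddq, np.tanh(5.0 * dq), 1.0])
            return tau_analytical(q, dq, ddq) + phi @ w      # analytical + learned error

    The learned component only has to capture what the analytical model misses, which is exactly the division of labor the paper argues for.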

  12. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    Science.gov (United States)

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697

  13. The use of conduction model in laser weld profile computation

    Science.gov (United States)

    Grabas, Bogusław

    2007-02-01

    Profiles of joints resulting from deep penetration laser beam welding of a flat workpiece of carbon steel were computed. A semi-analytical conduction model solved with Green's function method was used in computations. In the model, the moving heat source was attenuated exponentially in accordance with Beer-Lambert law. Computational results were compared with those in the experiment.
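
    For context, here is a minimal sketch of the classic quasi-steady conduction solution for a point heat source moving over a semi-infinite workpiece (the Rosenthal solution), the simplest member of the family of conduction models solved with Green's functions; the material values are illustrative, and the exponentially attenuated distributed source of the paper is not included.

        import numpy as np

        def rosenthal_T(x, y, z, t, Q=2000.0, v=0.01, T0=293.0,
                        k=45.0, alpha=1.2e-5):
            """Quasi-steady temperature [K] for a point source moving along +x
            on a semi-infinite body.  Q [W] absorbed power, v [m/s] travel
            speed, k [W/m/K] conductivity, alpha [m^2/s] diffusivity."""
            xi = x - v * t                        # coordinate in the moving frame
            R = np.sqrt(xi**2 + y**2 + z**2)      # distance from the source
            return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

        # A weld-pool outline is the locus where T equals the melting temperature
        print(rosenthal_T(x=0.002, y=0.0, z=0.001, t=0.0))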

  14. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
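
    The MEC idea can be illustrated with a toy reluctance network solved like a resistor circuit: an MMF source (the magnet) drives flux through series-parallel flux tubes, with leakage as a parallel branch. The geometry and material numbers below are invented for illustration and do not describe the TFM of the paper, and the iron is kept linear (the paper's model additionally handles saturation).

        import math

        MU0 = 4e-7 * math.pi

        def reluctance(length, area, mu_r=1.0):
            """Reluctance of a prismatic flux tube [A/Wb]."""
            return length / (mu_r * MU0 * area)

        # Hypothetical one-pole flux path: magnet MMF drives flux through the
        # magnet, two air gaps, and the iron; a leakage path shunts the gaps.
        F_m = 900e3 * 4e-3                          # coercivity [A/m] * magnet length [m]
        R_mag = reluctance(4e-3, 4e-4, mu_r=1.05)
        R_gap = reluctance(1e-3, 4e-4)
        R_iron = reluctance(60e-3, 4e-4, mu_r=2000.0)   # linear iron, no saturation
        R_leak = reluctance(8e-3, 1e-4)

        R_branch = 2 * R_gap + R_iron               # series gap-iron-gap branch
        R_par = R_branch * R_leak / (R_branch + R_leak)
        phi_total = F_m / (R_mag + R_par)           # total flux out of the magnet
        phi_gap = phi_total * R_leak / (R_branch + R_leak)  # flux-divider analogue
        print(f"air-gap flux = {phi_gap:.2e} Wb")

    Because each flux tube is a closed-form element, evaluating a candidate geometry costs microseconds, which is the speed advantage over finite element solvers that the abstract quantifies.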

  15. Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.

    Science.gov (United States)

    Dasbach, Erik J; Elbasha, Elamin H

    2017-07-01

    Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
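
    As an example of the software-engineering practices the authors advocate, the snippet below unit-tests structural invariants of a hypothetical three-state Markov cohort model; the states and probabilities are invented, but checks of this kind (row sums, mass conservation, absorbing states) are typical automated verification targets.

        import numpy as np

        # Transition matrix of a hypothetical 3-state cohort model
        # (states: well, sick, dead), one-year cycle.
        P = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])

        def test_rows_are_probabilities():
            assert np.all(P >= 0) and np.all(P <= 1)
            assert np.allclose(P.sum(axis=1), 1.0)      # rows must sum to one

        def test_cohort_mass_is_conserved():
            cohort = np.array([1000.0, 0.0, 0.0])
            for _ in range(50):
                cohort = cohort @ P
                assert np.isclose(cohort.sum(), 1000.0)  # nobody appears or vanishes

        def test_death_is_absorbing():
            assert P[2, 2] == 1.0

        if __name__ == "__main__":
            test_rows_are_probabilities()
            test_cohort_mass_is_conserved()
            test_death_is_absorbing()
            print("all verification checks passed")

    Run under a test framework on every model change, such checks catch the silent spreadsheet- and code-level errors the paper is concerned with.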

  16. Qudit quantum computation in the Jaynes-Cummings model

    DEFF Research Database (Denmark)

    Mischuck, Brian; Mølmer, Klaus

    2013-01-01

    We have developed methods for performing qudit quantum computation in the Jaynes-Cummings model with the qudits residing in a finite subspace of individual harmonic oscillator modes, resonantly coupled to a spin-1/2 system. The first method determines analytical control sequences for the one- and two-qudit gates necessary for universal quantum computation by breaking down the desired unitary transformations into a series of state preparations implemented with the Law-Eberly scheme [Law and Eberly, Phys. Rev. Lett. 76, 1055 (1996)]. The second method replaces some of the analytical pulse...

  17. Analytical modelling of hydrogen transport in reactor containments

    International Nuclear Information System (INIS)

    Manno, V.P.

    1983-09-01

    A versatile computational model of hydrogen transport in nuclear plant containment buildings is developed. The background and significance of hydrogen-related nuclear safety issues are discussed. A computer program is constructed that embodies the analytical models. The thermofluid dynamic formulation spans a wide applicability range, from rapid two-phase blowdown transients to slow incompressible hydrogen injection. Detailed ancillary models of molecular and turbulent diffusion, mixture transport properties, multi-phase multicomponent thermodynamics and heat sink modelling are addressed. The numerical solution of the continuum equations emphasizes both accuracy and efficiency in the employment of relatively coarse discretization and long time steps. Reducing undesirable numerical diffusion is also addressed. Problem geometry options include lumped parameter zones, one-dimensional meshes, two-dimensional Cartesian or axisymmetric coordinate systems, and three-dimensional Cartesian or cylindrical regions. An efficient lumped nodal model is included for simulation of events in which spatial resolution is not significant. Several validation calculations are reported.

  18. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system where: data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...

  19. Analytical predictions of SGEMP response and comparisons with computer calculations

    International Nuclear Information System (INIS)

    de Plomb, E.P.

    1976-01-01

    An analytical formulation for the prediction of SGEMP surface current response is presented. Only two independent dimensionless parameters are required to predict the peak magnitude and rise time of SGEMP induced surface currents. The analysis applies to limited (high fluence) emission as well as unlimited (low fluence) emission. Cause-effect relationships for SGEMP response are treated quantitatively, and yield simple power law dependencies between several physical variables. Analytical predictions for a large matrix of SGEMP cases are compared with an array of about thirty-five computer solutions of similar SGEMP problems, which were collected from three independent research groups. The theoretical solutions generally agree with the computer solutions as well as the computer solutions agree with one another. Such comparisons typically show variations less than a ''factor of two.''

  20. Reliable computation of roots in analytical waveguide modeling using an interval-Newton approach and algorithmic differentiation.

    Science.gov (United States)

    Bause, Fabian; Walther, Andrea; Rautenberg, Jens; Henning, Bernd

    2013-12-01

    For the modeling and simulation of wave propagation in geometrically simple waveguides such as plates or rods, one may employ the analytical global matrix method. That is, a certain (global) matrix depending on the two parameters wavenumber and frequency is built. Subsequently, one must calculate all parameter pairs within the domain of interest where the global matrix becomes singular. For this purpose, one could compute all roots of the determinant of the global matrix when the two parameters vary in the given intervals. This requirement to calculate all roots is actually the method's most concerning restriction. Previous approaches are based on so-called mode-tracers, which use the physical phenomenon that solutions, i.e., roots of the determinant of the global matrix, appear in a certain pattern, the waveguide modes, to limit the root-finding algorithm's search space with respect to consecutive solutions. In some cases, these reductions of the search space yield only an incomplete set of solutions, because some roots may be missed as a result of uncertain predictions. Therefore, we propose replacement of the mode-tracer approach with a suitable version of an interval-Newton method. To apply this interval-based method, we extended the interval and derivative computation provided by a numerical computing environment such that corresponding information is also available for Bessel functions used in circular models of acoustic waveguides. We present numerical results for two different scenarios. First, a polymeric cylindrical waveguide is simulated, and second, we show simulation results of a one-sided fluid-loaded plate. For both scenarios, we compare results obtained with the proposed interval-Newton algorithm and commercial software.
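
    A compact sketch of the interval-Newton idea on a stand-in scalar equation: intervals where the function provably cannot vanish are discarded, the Newton step contracts intervals containing a root, and bisection handles intervals whose derivative enclosure contains zero. For brevity the enclosures below are approximated by dense sampling; a rigorous implementation uses true interval arithmetic with algorithmic differentiation, as the paper describes (extended there to Bessel functions).

        import math

        def f(x):  return x * math.cos(x) - 0.5     # stand-in "determinant" function
        def df(x): return math.cos(x) - x * math.sin(x)

        def enclose(fun, a, b, n=128, pad=1e-6):
            # Crude range enclosure by sampling; NOT rigorous, illustration only
            vals = [fun(a + (b - a) * i / n) for i in range(n + 1)]
            return min(vals) - pad, max(vals) + pad

        def interval_newton(a, b, tol=1e-10):
            """Return small enclosures of all roots of f in [a, b]."""
            flo, fhi = enclose(f, a, b)
            if flo > 0.0 or fhi < 0.0:
                return []                            # f cannot vanish here: discard
            if b - a < tol:
                return [(a, b)]
            m = 0.5 * (a + b)
            dlo, dhi = enclose(df, a, b)
            if dlo <= 0.0 <= dhi:                    # derivative may vanish: bisect
                return interval_newton(a, m, tol) + interval_newton(m, b, tol)
            # Newton step N(X) = m - f(m)/F'(X), intersected with X
            lo, hi = sorted((m - f(m) / dlo, m - f(m) / dhi))
            lo, hi = max(lo, a), min(hi, b)
            if lo > hi:
                return []                            # empty intersection: no root
            if hi - lo > 0.9 * (b - a):              # too little progress: bisect
                mid = 0.5 * (lo + hi)
                return interval_newton(lo, mid, tol) + interval_newton(mid, hi, tol)
            return interval_newton(lo, hi, tol)

        print(interval_newton(0.0, 10.0))

    Unlike a mode-tracer, nothing here depends on predicting where the next root "should" be, which is why the approach cannot silently skip solutions; adjacent returned boxes around the same root simply need merging.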

  1. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  2. Physical-analytical model for cesium/oxygen coadsorption on tungsten

    International Nuclear Information System (INIS)

    Rasor, N.S.

    1992-01-01

    In this paper a physical-analytical model is formulated for computing the emission and vaporization properties of a surface immersed in a multi-species vapor. The evaporation and condensation processes are assumed to be identical to those for an equilibrium adsorbed phase in equilibrium with its vapor, permitting statistical mechanical computation of the sticking coefficient for the practical non-equilibrium condensation condition. Two classes of adsorption sites are defined corresponding to superficial and interstitial coadsorption. The work function is computed by a self-consistent summation over the dipole moments of the various coadsorbed species in their mutual electric field. The model adequately describes observed emission and evaporation from tungsten surfaces immersed in pure cesium vapor and in pure oxygen vapor. Using available and estimated properties for 17 species of cesium, oxygen, tungsten and their compounds, the computed work function for tungsten immersed in Cs/O vapor is compared with limited available experimental data, and the basic phenomenology of Cs/O coadsorption electrodes is discussed

  3. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    Science.gov (United States)

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...
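
    The essence of the analytic element method can be shown in a few lines: closed-form elements (here, uniform flow plus two pumping wells) are superposed in a complex discharge potential, so heads and flows are evaluated exactly at any point with no grid. The values below are illustrative, and the sketch is not ModAEM's API.

        import numpy as np

        def omega(z, Q0=1e-3, wells=((50 + 40j, 300.0), (120 - 30j, 150.0))):
            """Complex discharge potential; z is a location in meters (complex)."""
            w = -Q0 * z                                     # uniform background flow
            for zw, Q in wells:
                w += Q / (2.0 * np.pi) * np.log(z - zw)     # well element (sink)
            return w

        z = 80.0 + 10.0j
        Phi = omega(z).real       # discharge potential; head follows from Phi
        psi = omega(z).imag       # stream function: its contours are flow lines
        # Discharge vector from W = -dOmega/dz, here by a central difference
        qx_qy = -np.conj((omega(z + 1e-4) - omega(z - 1e-4)) / 2e-4)
        print(Phi, psi, qx_qy)

    Because evaluation points are independent of one another, this formulation parallelizes almost perfectly, which is what makes the high performance computing angle of the brief natural.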

  4. An Analytical Model for Prediction of Magnetic Flux Leakage from Surface Defects in Ferromagnetic Tubes

    Directory of Open Access Journals (Sweden)

    Suresh V.

    2016-02-01

    Full Text Available In this paper, an analytical model is proposed to predict magnetic flux leakage (MFL) signals from surface defects in ferromagnetic tubes. The analytical expression consists of elliptic integrals of the first kind based on the magnetic dipole model. The radial (Bz) component of the leakage fields is computed for cylindrical holes in ferromagnetic tubes. The effectiveness of the model has been studied by analyzing MFL signals as a function of the defect parameters and lift-off. The model-predicted results are verified with experimental results, and a good agreement is observed between the analytical and the experimental results. This analytical expression could be used for quick prediction of MFL signals and also as input data for defect reconstruction in the inverse MFL problem.

  5. Subject-enabled analytics model on measurement statistics in health risk expert system for public health informatics.

    Science.gov (United States)

    Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun

    2017-11-01

    This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports both modes, step-by-step analysis and an auto-computing process, respectively for preliminary evaluation and real-time computation. The proposed model was evaluated by re-computing prior studies on the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospital. The simulation results were validated against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for data amounts of more than 230K sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. A Computer Library for Ray Tracing in Analytical Media

    International Nuclear Information System (INIS)

    Miqueles, Eduardo; Coimbra, Tiago A; Figueiredo, J J S de

    2013-01-01

    Ray tracing is an important tool not only for forward but also for inverse problems in geophysics, on which most seismic processing steps depend. However, implementing ray tracing codes can be very time consuming. This article presents a computer library to trace rays in 2.5D media composed of a stack of layers. The velocity profile inside each layer is such that the eikonal equation can be analytically solved. Therefore, the ray tracing within such a profile is fast and accurate. The great advantage of an analytical ray tracing library is the numerical precision of the computed quantities and the fast execution of the implemented codes. Ray tracing programs have existed for a long time, for example the seis package by Cervený, which uses a numerical approach to compute the ray. Although numerical methods can solve more general problems, analytical ones can be part of a more sophisticated simulation process, where the ray tracing time is critical. We demonstrate the feasibility of our codes using numerical examples.
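
    As an example of why analytic velocity profiles pay off, a medium with a constant velocity gradient v(z) = v0 + k*z admits circular ray paths and an exact two-point traveltime, so no numerical ray bending is required; the velocity values below are illustrative.

        import math

        def traveltime_linear_gradient(x1, z1, x2, z2, v0=1500.0, k=0.6):
            """Analytic two-point traveltime [s] in v(z) = v0 + k*z (k in 1/s).

            Classic closed form for a constant-gradient medium: rays are
            circular arcs and the eikonal equation is solved exactly.
            """
            v1, v2 = v0 + k * z1, v0 + k * z2
            r2 = (x2 - x1) ** 2 + (z2 - z1) ** 2
            return math.acosh(1.0 + k * k * r2 / (2.0 * v1 * v2)) / k

        print(traveltime_linear_gradient(0.0, 0.0, 2000.0, 1000.0))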

  7. Pulse cleaning flow models and numerical computation of candle ceramic filters.

    Science.gov (United States)

    Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang

    2002-04-01

    Analytical and numerical computed models are developed for reverse pulse cleaning system of candle ceramic filters. A standard turbulent model is demonstrated suitably to the designing computation of reverse pulse cleaning system from the experimental and one-dimensional computational result. The computed results can be used to guide the designing of reverse pulse cleaning system, which is optimum Venturi geometry. From the computed results, the general conclusions and the designing methods are obtained.

  8. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.
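
    The case study's alignment problem is the classic Smith-Waterman local-alignment dynamic program, sketched below in its plain O(mn) form; HPC platforms accelerate exactly this kernel by vectorizing anti-diagonals or offloading the score matrix to GPUs and FPGAs. The scoring values are illustrative.

        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            """Local alignment score (classic O(len(a)*len(b)) dynamic program)."""
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i, j] = max(0,
                                  H[i - 1, j - 1] + s,   # (mis)match
                                  H[i - 1, j] + gap,     # deletion
                                  H[i, j - 1] + gap)     # insertion
            return H.max()

        print(smith_waterman("GGTTGACTA", "TGTTACGG"))   # toy DNA example

    The quadratic cost of this table is exactly why read sets in the hundreds of millions force the move to the parallel platforms surveyed in the paper.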

  9. Analytic nuclear scattering theories

    International Nuclear Information System (INIS)

    Di Marzio, F.; University of Melbourne, Parkville, VIC

    1999-01-01

    A wide range of nuclear reactions are examined in an analytical version of the usual distorted wave Born approximation. This new approach provides either semi-analytic or fully analytic descriptions of the nuclear scattering processes. The resulting computational simplifications, when used within the limits of validity, allow very detailed tests of both nuclear interaction models and large basis models of nuclear structure to be performed

  10. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  11. Coupling Numerical Methods and Analytical Models for Ducted Turbines to Evaluate Designs

    Directory of Open Access Journals (Sweden)

    Bradford Knight

    2018-04-01

    Full Text Available Hydrokinetic turbines extract energy from currents in oceans, rivers, and streams. Ducts can be used to accelerate the flow across the turbine to improve performance. The objective of this work is to couple an analytical model with a Reynolds averaged Navier–Stokes (RANS) computational fluid dynamics (CFD) solver to evaluate designs. An analytical model is derived for ducted turbines. A steady-state moving reference frame solver is used to analyze both the freestream and ducted turbine. A sliding mesh solver is examined for the freestream turbine. An efficient duct is introduced to accelerate the flow at the turbine. Since the turbine is optimized for operation in the freestream and not within the duct, there is a decrease in efficiency due to duct-turbine interaction. Despite the decrease in efficiency, the power extracted by the turbine is increased. The analytical model under-predicts the flow rejection from the duct relative to CFD, since CFD predicts separation but the analytical model does not. Once the mass flow rate is corrected, the model can be used as a design tool to evaluate how the turbine-duct pair reduces mass flow efficiency. To better understand this phenomenon, the turbine is also analyzed within a tube with the analytical model and CFD. The analytical model shows that the duct's mass flow efficiency reduces as a function of loading, showing that the system will be more efficient when lightly loaded. Using the conclusions of the analytical model, a more efficient ducted turbine system is designed. The turbine is pitched more heavily and the twist profile is adapted to the radial throat velocity profile.
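
    For orientation, the sketch below evaluates classical one-dimensional momentum (actuator-disk) theory for a bare hydrokinetic turbine, the starting point that the paper's ducted-turbine analytical model extends with a duct mass-flow efficiency; the density, inflow speed, and rotor area are illustrative assumptions.

        def actuator_disk(a, rho=1025.0, U=2.0, A=3.0):
            """1D momentum (actuator disk) theory for a bare turbine.

            a : axial induction factor; power peaks at a = 1/3 (Betz limit 16/27).
            """
            Ct = 4.0 * a * (1.0 - a)             # thrust coefficient
            Cp = 4.0 * a * (1.0 - a) ** 2        # power coefficient
            P = 0.5 * rho * A * U ** 3 * Cp      # extracted power [W]
            return Ct, Cp, P

        for a in (0.2, 1.0 / 3.0, 0.4):
            print(a, actuator_disk(a))

    A duct effectively raises the mass flow through the disk for a given loading, which is why the paper's corrected model can predict more power even when the turbine itself runs off-design.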

  12. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

    In the cloud computing environment, the rapidly growing number of cloud virtual machines (VMs) poses major security and management challenges. In order to address security issues in cloud computing virtualization environments, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, and studies the corresponding virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...
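
    Since the model ranks alternatives with AHP, the sketch below shows the core computation: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The three criteria and the judgment values are invented for illustration.

        import numpy as np

        # Pairwise comparisons for three hypothetical VM security criteria,
        # e.g. (isolation, migration safety, scheduling overhead); Saaty's 1-9 scale.
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                   # priority weights

        n = A.shape[0]
        CI = (eigvals[k].real - n) / (n - 1)           # consistency index
        RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
        print("weights:", w, "consistency ratio:", CI / RI)  # CR < 0.1 is acceptable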

  13. Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.

    Science.gov (United States)

    Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs

    2018-01-01

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance with European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
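
    A toy illustration of the idea behind secure multi-party computation, using additive secret sharing to compute a sum without any party seeing another's input; real MPC frameworks add secure channels, malicious-security checks, and protocols for far richer analytics than a sum.

        import secrets

        P = 2**61 - 1          # prime modulus; all arithmetic is mod P

        def share(x, n=3):
            """Split integer x into n additive shares that sum to x (mod P)."""
            parts = [secrets.randbelow(P) for _ in range(n - 1)]
            parts.append((x - sum(parts)) % P)
            return parts

        # Three hospitals each share a sensitive count; no party sees another's data
        counts = [120, 75, 240]
        shares = [share(c) for c in counts]

        # Each party locally sums the shares it received (one from every hospital)...
        partial = [sum(s[i] for s in shares) % P for i in range(3)]
        # ...and only the recombined total is ever revealed
        total = sum(partial) % P
        print(total)           # 435, with no individual count disclosed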

  14. Entry format for representation of analytical relations on the ES computer

    International Nuclear Information System (INIS)

    Katan, I.B.; Sal'nikova, O.V.; Blokhin, A.I.

    1981-01-01

    The structure and description of an input format for representation of analytical relations on the ES computer, as well as dictionaries of key words and system identificators for thermal-physical and hydraulic data, are presented. It is shown that the format considered can serve as the basis for building a library of analytical relations.

  15. Analytical modeling of Schottky tunneling source impact ionization MOSFET with reduced breakdown voltage

    Directory of Open Access Journals (Sweden)

    Sangeeta Singh

    2016-03-01

    Full Text Available In this paper, we have investigated a novel Schottky tunneling source impact ionization MOSFET (STS-IMOS) to lower the breakdown voltage of the conventional impact ionization MOS (IMOS) and developed an analytical model for the same. In STS-IMOS there is an accumulative effect of both impact ionization and source-induced barrier tunneling. The silicide source offers very low parasitic resistance, the outcome of which is an increment in the voltage drop across the intrinsic region for the same applied bias. This reduces the operating voltage and hence the device exhibits a significant reduction in both breakdown and threshold voltage. STS-IMOS shows high immunity against hot electron damage, and as a result the device reliability increases significantly. The analytical model for the impact ionization current (Iii) is developed based on the integration of the ionization integral (M). Similarly, to get the Schottky tunneling current (ITun) expression, the Wentzel–Kramers–Brillouin (WKB) approximation is employed. Analytical models for the threshold voltage and subthreshold slope are optimized against Schottky barrier height (ϕB) variation. The expression for the drain current is computed as a function of gate-to-drain bias via an integral expression. It is validated by comparing it with technology computer-aided design (TCAD) simulation results as well. In essence, this analytical framework provides the physical background for better understanding of STS-IMOS and its performance estimation.

  16. Self-consistent semi-analytic models of the first stars

    Science.gov (United States)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  17. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  18. Computational red teaming: risk analytics of big-data-to-decisions intelligent systems

    CERN Document Server

    Abbass, Hussein A

    2015-01-01

    Written to bridge the information needs of management and computational scientists, this book presents the first comprehensive treatment of Computational Red Teaming (CRT).  The author describes an analytics environment that blends human reasoning and computational modeling to design risk-aware and evidence-based smart decision making systems. He presents the Shadow CRT Machine, which shadows the operations of an actual system to think with decision makers, challenge threats, and design remedies. This is the first book to generalize red teaming (RT) outside the military and security domains and it offers coverage of RT principles, practical and ethical guidelines. The author utilizes Gilbert’s principles for introducing a science. Simplicity: where the book follows a special style to make it accessible to a wide range of  readers. Coherence:  where only necessary elements from experimentation, optimization, simulation, data mining, big data, cognitive information processing, and system thinking are blend...

  19. Toward an in-situ analytics and diagnostics framework for earth system models

    Science.gov (United States)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    , atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP), and to integrate these to allow for extreme-performance IO, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post-processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future where earth system models like ACME and CESM will have to address not only the volume of data but also its velocity. Earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.

  20. Theory, Modeling, Software and Hardware Development for Analytical and Computational Materials Science

    Science.gov (United States)

    Young, Gerald W.; Clemons, Curtis B.

    2004-01-01

    The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential toward providing the most effective simulations and communication of the simulation results. Toward these means, The University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.

  1. Development of computer-based analytical tool for assessing physical protection system

    Energy Technology Data Exchange (ETDEWEB)

    Mardhi, Alim, E-mail: alim-m@batan.go.id [National Nuclear Energy Agency Indonesia, (BATAN), PUSPIPTEK area, Building 80, Serpong, Tangerang Selatan, Banten (Indonesia); Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330 (Thailand); Pengvanich, Phongphaeth, E-mail: ppengvan@gmail.com [Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330 (Thailand)

    2016-01-22

    Assessment of physical protection system effectiveness is the priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool can offer a solution for evaluating likely threat scenarios. There are several tools that are readily available, such as EASI and SAPE; however, for our research purpose it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodological approach for modelling the adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response functions. The tool is capable of analyzing the most critical path and quantifying the probability of effectiveness of the system as a performance measure.
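
    A minimal sketch of the network approach described above: protection layers become weighted edges, and the most critical adversary path is the one minimizing cumulative detection probability, found here with Dijkstra's algorithm after a -log transform. The facility graph and detection values are invented for illustration and the sketch is not the authors' tool.

        import math
        import networkx as nx

        # Adversary path network: nodes are areas, edges are protection layers
        # with detection probabilities (illustrative values).
        G = nx.DiGraph()
        layers = [("offsite", "fence", 0.5), ("fence", "building", 0.7),
                  ("offsite", "gate", 0.4), ("gate", "building", 0.6),
                  ("building", "vault", 0.9)]
        for u, v, p_detect in layers:
            # Minimizing the sum of -ln(1 - p) maximizes the product of
            # non-detections, i.e. finds the path the system is LEAST likely
            # to interrupt: the most critical path.
            G.add_edge(u, v, weight=-math.log(1.0 - p_detect))

        path = nx.dijkstra_path(G, "offsite", "vault")
        p_nondetect = math.exp(-nx.dijkstra_path_length(G, "offsite", "vault"))
        print(path, "effectiveness on weakest path:", 1.0 - p_nondetect)

    Delay and response enter the full model as additional edge attributes that gate whether a detection is timely; the graph search itself stays the same.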

  2. Development of computer-based analytical tool for assessing physical protection system

    International Nuclear Information System (INIS)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is the priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool can offer a solution for evaluating likely threat scenarios. There are several tools that are readily available, such as EASI and SAPE; however, for our research purpose it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodological approach for modelling the adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response functions. The tool is capable of analyzing the most critical path and quantifying the probability of effectiveness of the system as a performance measure.

  3. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDE's being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDE's, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  4. Analytical simulation platform describing projections in computed tomography systems

    International Nuclear Information System (INIS)

    Youn, Hanbean; Kim, Ho Kyung

    2013-01-01

    To reduce patient dose, several approaches, such as spectral imaging using photon counting detectors and statistical image reconstruction, are being considered. Although image-reconstruction algorithms may significantly enhance image quality in reconstructed images at low dose, true signal-to-noise properties are mainly determined by image quality in the projections. We are developing an analytical simulation platform describing projections in order to investigate how quantum-interaction physics in each component of a CT system affects image quality in the projections. This simulator will be very useful for cost-effective design and optimization of CT systems as well as for the development of novel image-reconstruction algorithms. In this study, we present the progress of development of the simulation platform with an emphasis on the theoretical framework describing the generation of projection data. We have prepared the analytical simulation platform describing projections in computed tomography systems. The remaining work before the meeting includes the following: each stage in the cascaded signal-transfer model for obtaining projections will be validated by Monte Carlo simulations. We will build up energy-dependent scatter and pixel-crosstalk kernels, and show their effects on image quality in projections and reconstructed images. We will investigate the effects of projections obtained from various imaging conditions and system (or detector) operation parameters on reconstructed images. It is challenging to include the interaction physics of photon-counting detectors in the simulation platform. Detailed descriptions of the simulator will be presented with discussions on its performance and limitations as well as Monte Carlo validations. Computational cost will also be addressed in detail. The proposed method in this study is simple and can be used conveniently in a lab environment
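
    The kind of projection such a platform generates can be illustrated analytically: for a uniform disk phantom the Beer-Lambert line integral has a closed-form chord length, and Poisson noise models the photon statistics. All values are illustrative, and the cascaded detector stages of the paper's model are omitted.

        import numpy as np

        def disk_projection(s, mu=0.02, R=50.0, I0=1e5):
            """Parallel-beam projection of a uniform disk phantom (radius R mm,
            attenuation mu 1/mm) at detector coordinate s [mm]; Beer-Lambert
            with an analytic chord length, so no voxelization error enters."""
            chord = 2.0 * np.sqrt(np.maximum(R * R - s * s, 0.0))  # ray-disk length
            I = np.random.poisson(I0 * np.exp(-mu * chord))        # quantum noise
            return -np.log(np.maximum(I, 1) / I0)                  # measured projection

        s = np.linspace(-60.0, 60.0, 121)
        proj = np.array([disk_projection(v) for v in s])

    Because the noiseless projection is exact, any bias or noise structure seen in a reconstruction can be attributed to the algorithm or the detector model rather than to the phantom discretization.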

  5. Original analytic solution of a half-bridge modelled as a statically indeterminate system

    Science.gov (United States)

    Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela

    2016-12-01

    The paper presents an original computer-based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated either by discretizing the bending moment diagram into trapezoids or by using the relations specific to polygons. A second algorithm based on the method of initial parameters is also presented. Analyzing the new solution, we came to the conclusion that most of the computer code developed for another model may be reused. The results are useful for evaluating the behavior of the structure and for comparison with the results of finite element models.

  6. Sputtering of copper atoms by keV atomic and molecular ions: A comparison of experiment with analytical and computer based models

    CERN Document Server

    Gillen, D R; Goelich,

    2002-01-01

    Non-resonant multiphoton ionisation combined with quadrupole and time-of-flight analysis has been used to measure energy distributions of sputtered copper atoms. The sputtering of a polycrystalline copper target by 3.6 keV Ar+, N+ and CF2+ and 1.8 keV N+ and CF2+ ion bombardment at 45° has been investigated. The linear collision model in the isotropic limit fails to describe the high energy tail of the energy distributions. However, the TRIM.SP computer simulation has been shown to provide a good description. The results indicate that an accurate description of sputtering by low energy, molecular ions requires the use of computer simulation rather than analytical approaches. This is particularly important when considering plasma-surface interactions in plasma etching and deposition systems.

  7. Novel approach for dam break flow modeling using computational intelligence

    Science.gov (United States)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on a computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models. These include the difficulty of using the exact solutions and the unwanted fluctuations that arise in numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical, and Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.

  8. Simulation of reactive geochemical transport in groundwater using a semi-analytical screening model

    Science.gov (United States)

    McNab, Walt W.

    1997-10-01

    A reactive geochemical transport model, based on a semi-analytical solution to the advective-dispersive transport equation in two dimensions, is developed as a screening tool for evaluating the impact of reactive contaminants on aquifer hydrogeochemistry. Because the model utilizes an analytical solution to the transport equation, it is less computationally intensive than models based on numerical transport schemes, is faster, and it is not subject to numerical dispersion effects. Although the assumptions used to construct the model preclude consideration of reactions between the aqueous and solid phases, thermodynamic mineral saturation indices are calculated to provide qualitative insight into such reactions. Test problems involving acid mine drainage and hydrocarbon biodegradation signatures illustrate the utility of the model in simulating essential hydrogeochemical phenomena.
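
    For flavor, the one-dimensional analogue of such a semi-analytical transport solution is the classic Ogata-Banks closed form for a continuous inlet source, sketched below with illustrative parameter values; the paper's model extends the same advective-dispersive kernel to two dimensions and couples it to equilibrium geochemistry.

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v=0.5, D=0.1, C0=1.0):
            """1D advective-dispersive transport with a continuous source.

            v [m/d] seepage velocity, D [m^2/d] dispersion coefficient;
            returns concentration at distance x [m] and time t [d].
            """
            a = (x - v * t) / (2.0 * np.sqrt(D * t))
            b = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * C0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        print(ogata_banks(x=10.0, t=30.0))

    Because the solution is closed form, sweeping decades of parameter combinations for screening costs essentially nothing, and no numerical dispersion contaminates the plume fronts.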

  9. Ex Machina: Analytical platforms, Law and the Challenges of Computational Legal Science

    Directory of Open Access Journals (Sweden)

    Nicola Lettieri

    2018-04-01

    Full Text Available Over the years, computation has become a fundamental part of scientific practice in several research fields, extending far beyond the boundaries of the natural sciences. Data mining, machine learning, simulations and other computational methods lie today at the heart of the scientific endeavour in a growing number of social research areas, from anthropology to economics. In this scenario, an increasingly important role is played by analytical platforms: integrated environments allowing researchers to experiment with cutting-edge data-driven and computation-intensive analyses. The paper discusses the appearance of such tools in the emerging field of computational legal science. After a general introduction to the impact of computational methods on both natural and social sciences, we describe the concept and the features of an analytical platform exploring innovative cross-methodological approaches to the academic and investigative study of crime. Stemming from an ongoing project involving researchers from law, computer science and bioinformatics, the initiative is presented and discussed as an opportunity to raise a debate about the future of legal scholarship and, within it, the challenges of computational legal science.

  10. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  11. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    Science.gov (United States)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge which has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve, and the earth science community needs to move from the paradigm of file discovery, display, and local download to self-describing data cubes and data arrays that are available as online resources from either major data repositories or in the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial data resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying

  12. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, with their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  13. SALE: Safeguards Analytical Laboratory Evaluation computer code

    International Nuclear Information System (INIS)

    Carroll, D.J.; Bush, W.J.; Dolan, C.A.

    1976-09-01

    The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically-evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem

  14. AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES

    International Nuclear Information System (INIS)

    Robinson, Tyler D.; Catling, David C.

    2012-01-01

    We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric-pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
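
    As a rough illustration of the kind of model this abstract describes (the parameter values and the simple Schwarzschild-criterion patching below are assumptions for the sketch, not the authors' exact expressions), a gray two-stream radiative-equilibrium profile with a power-law scaling between pressure and optical depth can be patched onto a dry adiabat in a few lines of Python:

      import numpy as np

      T_eff = 255.0                    # effective temperature [K] (assumed, Earth-like)
      p0, tau0, n = 1.0e5, 2.0, 2.0    # surface pressure [Pa], surface optical depth, tau ~ p**n
      kappa = 2.0 / 7.0                # dry-adiabatic exponent R/cp for a diatomic gas

      p = np.linspace(1.0e3, p0, 400)
      tau = tau0 * (p / p0) ** n

      # Standard gray, two-stream radiative-equilibrium profile (Milne-Eddington):
      T_rad = (0.75 * T_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

      # Schwarzschild criterion: convect below the topmost superadiabatic level.
      lapse = np.gradient(np.log(T_rad), np.log(p))
      T = T_rad.copy()
      if (lapse > kappa).any():
          i = np.argmax(lapse > kappa)                  # ~ radiative-convective boundary
          T[i:] = T_rad[i] * (p[i:] / p[i]) ** kappa    # follow the dry adiabat below
          print(f"radiative-convective boundary near {p[i]:.0f} Pa")

    With a steep enough power-law index n, the radiative solution becomes superadiabatic at depth, reproducing the deep radiative-convective boundaries the abstract discusses for weakly attenuating atmospheres.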

  15. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies. A detailed and rigorous presentation of Markov chains with discrete time and state space. An appendix presenting probabilistic notions that are nec
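
    A minimal sketch contrasting the book's two themes (the 3-state transition matrix is an arbitrary example, not taken from the book): the stationary distribution computed analytically as a left eigenvector, and the same distribution estimated by Monte Carlo simulation of a long path.

      import numpy as np

      rng = np.random.default_rng(0)
      P = np.array([[0.5, 0.3, 0.2],
                    [0.1, 0.6, 0.3],
                    [0.2, 0.3, 0.5]])

      # Analytic: stationary pi solves pi P = pi with sum(pi) = 1.
      eigvals, eigvecs = np.linalg.eig(P.T)
      pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
      pi /= pi.sum()

      # Monte Carlo: long-run occupation frequencies of one simulated path.
      state, counts = 0, np.zeros(3)
      for _ in range(200_000):
          state = rng.choice(3, p=P[state])
          counts[state] += 1

      print("analytic :", np.round(pi, 4))
      print("simulated:", np.round(counts / counts.sum(), 4))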

  16. Analytical computation of reflection and transmission coefficients for love waves

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.

    1995-09-01

    The computation of transmission and reflection coefficients is an important step in the construction of synthetic seismograms for 2-D or 3-D media when the modal summation technique is used. These coupling coefficients for Love waves at a vertical discontinuity are computed analytically. Numerical tests for realistic structures show how the energy carried by an incoming mode is redistributed among the various modes existing on both sides of the vertical interface. (author). 15 refs, 8 figs

  17. A semi-analytical stationary model of a point-to-plane corona discharge

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F

    2012-01-01

    A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)

  18. Analytical local electron-electron interaction model potentials for atoms

    International Nuclear Information System (INIS)

    Neugebauer, Johannes; Reiher, Markus; Hinze, Juergen

    2002-01-01

    Analytical local potentials for modeling the electron-electron interaction in an atom reduce significantly the computational effort in electronic structure calculations. The development of such potentials has a long history, but some promising ideas have not yet been taken into account for further improvements. We determine a local electron-electron interaction potential akin to those suggested by Green et al. [Phys. Rev. 184, 1 (1969)], which are widely used in atom-ion scattering calculations, electron-capture processes, and electronic structure calculations. Generalized Yukawa-type model potentials are introduced. This leads, however, to shell-dependent local potentials, because the origin behavior of such potentials is different for different shells as has been explicated analytically [J. Neugebauer, M. Reiher, and J. Hinze, Phys. Rev. A 65, 032518 (2002)]. It is found that the parameters that characterize these local potentials can be interpolated and extrapolated reliably for different nuclear charges and different numbers of electrons. The analytical behavior of the corresponding localized Hartree-Fock potentials at the origin and at long distances is utilized in order to reduce the number of fit parameters. It turns out that the shell-dependent form of Green's potential, which we also derive, yields results of comparable accuracy using only one shell-dependent parameter

  19. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature on programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative or rote solution approaches, which can be influenced by the problem representation. Third, a set of solution approaches---many of which were identified in this study---describes what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices. Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming, in contrast to the already salient analytic procedures.

  20. Analytical calculation of heavy quarkonia production processes on a computer

    International Nuclear Information System (INIS)

    Braguta, V V; Likhoded, A K; Luchinsky, A V; Poslavsky, S V

    2014-01-01

    This report is devoted to the analytical calculation, on a computer, of heavy quarkonia production processes at modern experiments such as the LHC, B-factories and super B-factories. The theoretical description of heavy quarkonia is based on the factorization theorem. This theorem leads to a special structure of the production amplitudes, which can be used to develop a computer algorithm that calculates these amplitudes automatically. This report describes that algorithm. As an example of its application, we present the results of the calculation of double charmonia production in bottomonia decays and inclusive χ_cJ meson production in pp collisions

  1. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    International Nuclear Information System (INIS)

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository, and its implementation through the development of a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first-order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)
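
    The first-order leaching idea is compact enough to sketch directly (the rate constants and inventory below are illustrative assumptions, not the paper's values): the inventory decays by the radioactive constant lambda plus the leach constant k, and the source flux entering the unsaturated zone is k times the remaining inventory.

      import numpy as np

      lam = np.log(2) / 30.0   # decay constant [1/yr], e.g. a 30-yr half-life (assumed)
      k = 1.0e-3               # first-order leach rate [1/yr] (assumed)
      A0 = 1.0e12              # initial inventory [Bq] (assumed)

      t = np.linspace(0.0, 300.0, 7)
      A = A0 * np.exp(-(lam + k) * t)   # inventory remaining in the waste
      flux = k * A                      # release rate feeding the transport solution

      for ti, fi in zip(t, flux):
          print(f"t = {ti:6.1f} yr   release = {fi:.3e} Bq/yr")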

  2. Essential partial differential equations analytical and computational aspects

    CERN Document Server

    Griffiths, David F; Silvester, David J

    2015-01-01

    This volume provides an introduction to the analytical and numerical aspects of partial differential equations (PDEs). It unifies an analytical and computational approach for these; the qualitative behaviour of solutions being established using classical concepts: maximum principles and energy methods.   Notable inclusions are the treatment of irregularly shaped boundaries, polar coordinates and the use of flux-limiters when approximating hyperbolic conservation laws. The numerical analysis of difference schemes is rigorously developed using discrete maximum principles and discrete Fourier analysis. A novel feature is the inclusion of a chapter containing projects, intended for either individual or group study, that cover a range of topics such as parabolic smoothing, travelling waves, isospectral matrices, and the approximation of multidimensional advection–diffusion problems.   The underlying theory is illustrated by numerous examples and there are around 300 exercises, designed to promote and test unde...

  3. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Science.gov (United States)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource requirements and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  4. Analytical model of contamination during the drying of cylinders of jamonable muscle

    Science.gov (United States)

    Montoya Arroyave, Isabel

    2014-05-01

    For a cylinder of jamonable muscle of radius R and length much greater than R, considering that the internal resistance to the transfer of water is much greater than the external and that the internal resistance is a certain function of the distance to the axis, the distribution of pointwise moisture in the jamonable cylinder is computed analytically in terms of Bessel functions. During the drying and salting process the jamonable cylinder is susceptible to contamination by bacteria and protozoa from the environment. An analytical model of contamination is presented using the diffusion equation with sources and sinks, which is solved by the method of the Laplace transform, the Bromwich integral, the residue theorem and some special functions such as the Bessel and Heun functions. The critical time intervals of drying and salting are computed in order to obtain the minimum possible contamination. It is assumed that both external moisture and contaminants decrease exponentially with time. Contaminant profiles are plotted, and some possible techniques of contaminant detection are discussed. All computations are executed using computer algebra, specifically Maple. The results are relevant to the food industry, and some future lines of research are suggested.
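
    The Bessel-series solution mentioned above can be sketched for the simplest version of the problem (geometry and diffusivity are invented; the paper's salted-boundary and contamination terms are more elaborate): radial diffusion in a long cylinder, u_t = D (u_rr + u_r / r), with u(R, t) = 0 and u(r, 0) = 1.

      import numpy as np
      from scipy.special import j0, j1, jn_zeros

      R = 0.05                      # cylinder radius [m] (assumed)
      D = 1.0e-9                    # moisture diffusivity [m^2/s] (assumed)
      alphas = jn_zeros(0, 30)      # first 30 positive roots of J0

      def u(r, t):
          """Dimensionless moisture at radii r and time t (classical series)."""
          terms = (2.0 / (alphas * j1(alphas))) \
                  * j0(np.outer(r / R, alphas)) \
                  * np.exp(-D * (alphas / R) ** 2 * t)
          return terms.sum(axis=1)

      r = np.linspace(0.0, R, 5)
      print(u(r, t=5 * 24 * 3600.0))   # radial moisture profile after 5 days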

  5. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  6. Computer modeling of lung cancer diagnosis-to-treatment process.

    Science.gov (United States)

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure for deriving closed formulas evaluating diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
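
    A small sketch of the closed-formula side of such models (states and probabilities are invented for illustration, not taken from the paper): treating the care pathway as an absorbing Markov chain, the expected number of steps to treatment follows from the fundamental matrix N = (I - Q)^-1.

      import numpy as np

      # Transient states: 0 = referral, 1 = diagnosis, 2 = staging;
      # absorbing state: treatment (its probabilities are implicit in the row deficits).
      Q = np.array([[0.2, 0.7, 0.0],
                    [0.0, 0.3, 0.6],
                    [0.0, 0.0, 0.4]])

      N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
      steps = N @ np.ones(3)             # expected steps to absorption per start state
      print("expected steps to treatment:", np.round(steps, 2))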

  7. Data analysis and analytical predictions of a steam generator tube bundle flow field for verification of 2-D T/H computer code

    International Nuclear Information System (INIS)

    Hwang, J.Y.; Reid, H.C.; Berringer, R.

    1981-01-01

    Analytical predictions of the flow field within a 60 deg segment flow model of a proposed sodium-heated steam generator are compared to experimental results obtained from several axial levels between baffling. The axial/crossflow field is developed by the use of alternating multi-ported baffling, accomplished by the radial distribution of perforations. Radial and axial porous-model predictions from an axisymmetric computational analysis are compared to intra-pitch experimental data at the mid-baffle-span location for various levels. The analytical mechanics utilizes a cylindrical, axisymmetric, finite difference model solving the conservation of mass and momentum equations. 6 refs

  8. Core monitoring with analytical model adaption

    International Nuclear Information System (INIS)

    Linford, R.B.; Martin, C.L.; Parkos, G.R.; Rahnema, F.; Williams, R.D.

    1992-01-01

    The monitoring of BWR cores has evolved rapidly due to more capable computer systems, improved analytical models and new types of core instrumentation. Coupling of first-principles diffusion theory models, such as those applied in design, to the core instrumentation has been achieved by GE with an adaptive methodology in the 3D Monicore system. The adaptive methods allow definition of 'leakage parameters' which are incorporated directly into the diffusion models to enhance monitoring accuracy and predictions. These improved models for core monitoring allow substitution of traversing in-core probe (TIP) and local power range monitor (LPRM) readings with calculations, so that monitoring continues with no loss of accuracy or reduction of thermal limits. Experience in small BWR cores has shown that with one out of three TIP machines failed there was no operating limitation or impact from the substitute calculations. Other capabilities exist in 3D Monicore to align TIPs more accurately and accommodate other types of system measurements or anomalies. 3D Monicore also includes an accurate predictive capability which uses the adaptive results from previous monitoring calculations and is used to plan and optimize reactor maneuvers/operations to improve operating efficiency and reduce support requirements.

  9. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    Science.gov (United States)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other

  10. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    Science.gov (United States)

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”) which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the
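
    A rough sketch of the kernel-similarity idea behind methods like SBM (this is a generic nearest-neighbor kernel smoother, not the proprietary SBM algorithm, and the data are synthetic): reconstruct the "expected" multivariate sample from a library of normal exemplars, then flag drift via the residual.

      import numpy as np

      rng = np.random.default_rng(1)
      library = rng.normal(size=(200, 4))     # exemplars of normal physiology (synthetic)
      x = np.array([0.1, -0.2, 3.0, 0.0])     # current observation; 3rd vital has drifted

      # Gaussian similarity of x to every exemplar, then a weighted reconstruction.
      w = np.exp(-0.5 * np.sum((library - x) ** 2, axis=1))
      x_hat = (w[:, None] * library).sum(axis=0) / w.sum()

      residual = np.linalg.norm(x - x_hat)    # large residual -> possible deterioration
      print("reconstruction:", np.round(x_hat, 2), " residual:", round(residual, 2))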

  11. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, Yu. A.

    2018-03-01

    The paper considers the acquisition of a spline hob geometric model. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion, in the parameters of the machine tool system, of cutting edge positioning and orientation. A computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of spline hob mathematical model development. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of a spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application provides the opportunity to use the acquired model for milling process simulation. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In the mentioned example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.

  12. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    Science.gov (United States)

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water-, energy-, space-, and cost-efficient system for growing plants in constrained spaces or exhausted land areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., a peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plant growth in order to assure optimal nutritional uptake and tomato production.

  13. Nano-Modeling and Computation in Bio and Brain Dynamics

    Directory of Open Access Journals (Sweden)

    Paolo Di Sia

    2016-04-01

    The study of brain dynamics currently utilizes the new features of nanobiotechnology and bioengineering. New geometric and analytical approaches appear very promising in all scientific areas, particularly in the study of brain processes. Efforts to engage in deep comprehension lead to a change in the inner brain parameters, in order to mimic the external transformation by the proper use of sensors and effectors. This paper highlights some intersecting research areas of natural computing, nanotechnology, and brain modeling and considers two interesting theoretical approaches related to brain dynamics: (a) the memory in a neural network, not as a passive element for storing information, but integrated in the neural parameters as synaptic conductances; and (b) a new transport model based on analytical expressions of the most important transport parameters, which works from the sub-pico level to the macro level, able both to explain existing data and to give new predictions. Complex biological systems are highly dependent on the context, which suggests a “more nature-oriented” computational philosophy.

  14. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    Science.gov (United States)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and against commercial or dedicated purely numerical approaches.

  15. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  16. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs.

  17. Cellular Scanning Strategy for Selective Laser Melting: Capturing Thermal Trends with a Low-Fidelity, Pseudo-Analytical Model

    Directory of Open Access Journals (Sweden)

    Sankhya Mohanty

    2014-01-01

    Simulations of additive manufacturing processes are known to be computationally expensive. The resulting large runtimes prohibit their application in secondary analyses requiring several complete simulations, such as optimization studies and sensitivity analysis. In this paper, a low-fidelity pseudo-analytical model is introduced to enable such secondary analyses. The model mimics a finite element model and captures the thermal trends associated with the process. The model has been validated and subsequently applied in a small optimization case study. The pseudo-analytical modelling technique is established as a fast tool for primary modelling investigations.
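
    A classic low-fidelity analytical surrogate in the same spirit (though not the authors' model, and with illustrative material parameters) is Rosenthal's quasi-steady moving point source, which captures the asymmetric thermal trend around a scanning laser at negligible cost:

      import numpy as np

      P = 100.0      # absorbed laser power [W] (assumed)
      v = 0.5        # scan speed [m/s] (assumed)
      k = 20.0       # thermal conductivity [W/m/K] (assumed, steel-like)
      alpha = 5e-6   # thermal diffusivity [m^2/s] (assumed)
      T0 = 293.0     # ambient/preheat temperature [K]

      def rosenthal(x, y, z):
          """Temperature in the frame moving with the source; x is along the scan."""
          R = np.sqrt(x**2 + y**2 + z**2)
          return T0 + P / (2.0 * np.pi * k * R) * np.exp(-v * (x + R) / (2.0 * alpha))

      # Strong fore-aft asymmetry: hot tail behind the source, cold material ahead.
      print("200 um behind:", rosenthal(-200e-6, 0.0, 50e-6))
      print("200 um ahead :", rosenthal(+200e-6, 0.0, 50e-6))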

  18. Data mining and business analytics with R

    CERN Document Server

    Ledolter, Johannes

    2013-01-01

    Collecting, analyzing, and extracting valuable information from a large amount of data requires easily accessible, robust, computational and analytical tools. Data Mining and Business Analytics with R utilizes the open source software R for the analysis, exploration, and simplification of large high-dimensional data sets. As a result, readers are provided with the needed guidance to model and interpret complicated data and become adept at building powerful models for prediction and classification. Highlighting both underlying concepts and practical computational skills, Data Mining

  19. Cost effectiveness of ovarian reserve testing in in vitro fertilization : a Markov decision-analytic model

    NARCIS (Netherlands)

    Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.

    2011-01-01

    Objective: To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). Design: A Markov decision model based on data from the literature and original patient data. Setting: Decision analytic framework. Patient(s): Computer-simulated cohort of subfertile women aged

  20. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD.

    Science.gov (United States)

    Dixon, Lance J; Luo, Ming-Xing; Shtabovenko, Vladyslav; Yang, Tong-Zhi; Zhu, Hua Xing

    2018-03-09

    The energy-energy correlation (EEC) between two detectors in e^{+}e^{-} annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  1. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem which relate to its conditioning, and, in this manner, to prove the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations.

  2. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture, code-named Knights Landing (KNL), for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
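
    This is not the SKOPE/KNL model of the paper, but a minimal analytical performance sketch in the same spirit: a roofline bound derived from peak compute and memory bandwidth. The hardware numbers are rough KNL-like assumptions, not measured values.

      # Roofline: attainable throughput = min(peak compute, bandwidth x intensity).
      peak_gflops = 3000.0   # double-precision peak [GFLOP/s] (assumed)
      mem_bw = 400.0         # high-bandwidth memory [GB/s] (assumed)

      def roofline(arith_intensity):
          """Attainable GFLOP/s for a kernel with the given flop/byte ratio."""
          return min(peak_gflops, mem_bw * arith_intensity)

      for ai in (0.25, 1.0, 7.5, 30.0):   # from stream-like to compute-bound kernels
          print(f"AI = {ai:5.2f} flop/byte -> {roofline(ai):7.1f} GFLOP/s bound")

    Even a two-parameter model of this kind indicates whether a kernel should be tuned for data movement or for vectorization, which is the sort of recommendation the abstract refers to.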

  3. Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S., E-mail: swathi@iisertvm.ac.in [School of Chemistry, Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), Kerala 695016 (India)

    2014-11-14

    The energetics of the interaction of noble gas atoms with carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs as well as the off-axis motion are discussed. Analyses of the forces, interaction energies, and acceptance and suction energies for the encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98-4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on various CNTs. The results obtained using the analytical model are subsequently compared with calculations performed with dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as the equations can be programmed numerically and the results obtained in much less time.

  4. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff’s law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell’s equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource requirements and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  5. Comparison of analytic models of instability of rarefied gas flow in a channel

    Energy Technology Data Exchange (ETDEWEB)

    Aksenova, Olga A. [St.-Petersburg State University, Department of Mathematics and Mechanics, 198504, Universitetskiy pr., 28, Peterhof, St.-Petersburg (Russian Federation); Khalidov, Iskander A. [St.-Petersburg State Polytechnic University, Department of Mathematics and Mechanics, 195251, Polytechnicheskaya ul., 29, St.-Petersburg (Russian Federation)

    2014-12-09

    Numerical and analytical results are compared concerning the limit properties of the trajectories, attractors and bifurcations of rarefied gas flows in channels. The cascade of bifurcations obtained in our previous analytical and numerical investigations is simulated numerically for different scattering functions V generalizing the ray-diffuse reflection of gas particles from the surface. The main purpose of the Monte Carlo numerical simulation is the investigation of the properties of different analytic nonlinear dynamic systems corresponding to rarefied gas flow in a channel. The results are compared both for the models suggested originally by R. N. Miroshin and for the approximations considered here for the first time or studied in our subsequent papers. Analytical solutions were obtained earlier for ray reflection, which means a single determined velocity of gas atoms scattered from the walls, generally different from specular reflection. The nonlinear iterative equation describing a rarefied gas flow in a long channel becomes unstable in some regions of the corresponding parameters of V (i.e., it is sensitive to boundary conditions). The values of the parameters are found from analytical approximations in these regions. Numerical results show that the chaotic behavior of the nonlinear dynamic system corresponds to strange attractors and is clearly distinguished from the Maxwellian distribution and from equilibrium on the whole. In the regions of instability (as the dimension of the attractor increases), the search for a corresponding state requires much more computation time and far more data (the amount of data required increases exponentially with the embedding dimension). Therefore the main complication in the computation is reducing both the computing time and the amount of data needed to find a suitably close solution. To reduce the computing time, our analytical results are applied. Flow conditions satisfying the requirements to the experiment are

  6. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  7. Electron Fermi acceleration in collapsing magnetic traps: Computational and analytical models

    International Nuclear Information System (INIS)

    Gisler, G.; Lemons, D.

    1990-01-01

    The authors consider the heating and acceleration of electrons trapped on magnetic field lines between approaching magnetic mirrors. Such a collapsing magnetic trap and consequent electron energization can occur whenever a curved (or straight) flux tube drifts into a relatively straight (or curved) perpendicular shock. The relativistic, three-dimensional, collisionless test particle simulations show that an initial thermal electron distribution is bulk heated while a few individual electrons are accelerated to many times their original energy before they escape the trap. Upstream field-aligned beams and downstream pancake distributions perpendicular to the field are predicted. In the appropriate limit the simulation results agree well with a nonrelativistic analytic model of the distribution of escaping electrons which is based on the first adiabatic invariant and energy conservation between collisions with the mirrors. Space science and astrophysical applications are discussed
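
    An idealized 1-D toy version of the trap described above (not the authors' 3-D relativistic simulation; all quantities are in arbitrary units) shows the Fermi mechanism directly: each head-on reflection off a mirror approaching at speed u adds 2u to the particle speed.

      L0 = L = 1.0    # initial mirror separation (arbitrary units)
      u = 0.01        # approach speed of each mirror (assumed)
      v = 1.0         # initial particle speed
      t = 0.0

      while L > 0.1 * L0:                # stop once the trap has collapsed by 90%
          t_bounce = L / (v + u)         # time to reach the oncoming mirror
          t += t_bounce
          L -= 2.0 * u * t_bounce        # both mirrors closed in meanwhile
          v += 2.0 * u                   # elastic reflection off a moving wall

      print(f"final speed = {v:.2f} (energy gain x{v**2:.1f}) after t = {t:.2f}")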

  8. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational-modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  9. Enabling analytics on sensitive medical data with secure multi-party computation

    NARCIS (Netherlands)

    M. Veeningen (Meilof); S. Chatterjea (Supriyo); A.Z. Horváth (Anna Zsófia); G. Spindler (Gerald); E. Boersma (Eric); P. van der Spek (Peter); O. van der Galiën (Onno); J. Gutteling (Job); W. Kraaij (Wessel); P.J.M. Veugen (Thijs)

    2018-01-01

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multiparty computation can enable such data

  10. Analytical effective tensor for flow-through composites

    Science.gov (United States)

    Sviercoski, Rosangela De Fatima [Los Alamos, NM]

    2012-06-19

    A machine, method and computer-usable medium for modeling the average flow of a substance through a composite material. Such modeling includes an analytical calculation of an effective tensor K^a suitable for use with a variety of media. The analytical calculation corresponds to an approximation to the tensor K, and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle in a defined Cartesian system, and inserting this angle into a rotation formula to compute the off-diagonal values and determine their sign.
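
    Not the patent's analytical tensor K^a, but a sketch of the standard first step such upscaling builds on: the diagonal of the effective tensor is bracketed by the Wiener bounds (arithmetic and harmonic means) of the cell's conductivity field. The random field below is an invented example.

      import numpy as np

      rng = np.random.default_rng(2)
      K = rng.uniform(1.0, 10.0, size=(64, 64))   # scalar conductivity field (assumed)

      k_upper = K.mean()                  # arithmetic mean: flow parallel to layering
      k_lower = 1.0 / (1.0 / K).mean()    # harmonic mean: flow across layering

      print(f"effective diagonal value bounded by [{k_lower:.2f}, {k_upper:.2f}]")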

  11. 2D analytical modeling of a wholly superconducting synchronous reluctance motor

    International Nuclear Information System (INIS)

    Male, G; Lubin, T; Mezani, S; Leveque, J

    2011-01-01

    An analytical computation of the magnetic field distribution in a wholly superconducting synchronous reluctance motor is proposed. The stator of the studied motor consists of three-phase HTS armature windings fed by AC currents. The rotor is made of HTS bulks, which have a nearly diamagnetic behavior under zero-field cooling. The electromagnetic torque is obtained by the interaction between the rotating magnetic field created by the HTS windings and the HTS bulks. The proposed analytical model is based on the resolution of Laplace's and Poisson's equations (by the separation-of-variables technique) for each sub-domain, i.e. stator windings, air-gap, holes between HTS bulks and the exterior iron shield. For the study, the HTS bulks are considered as perfect diamagnetic materials. The boundary and continuity conditions between the sub-domains yield the global solution. Magnetic field distributions and the electromagnetic torque obtained by the analytical method are compared with those obtained from finite element analyses.

  12. 2D analytical modeling of a wholly superconducting synchronous reluctance motor

    Energy Technology Data Exchange (ETDEWEB)

    Male, G; Lubin, T; Mezani, S; Leveque, J, E-mail: gael.male@green.uhp-nancy.fr [Groupe de Recherche en Electrotechnique et Electronique de Nancy, Universite Henri Poincare, Faculte des Sciences et Technologies BP 70239, 54506 Vandoeuvre les Nancy CEDEX (France)

    2011-03-15

    An analytical computation of the magnetic field distribution in a wholly superconducting synchronous reluctance motor is proposed. The stator of the studied motor consists of three-phase HTS armature windings fed by AC currents. The rotor is made of HTS bulks, which have a nearly diamagnetic behavior under zero-field cooling. The electromagnetic torque is obtained by the interaction between the rotating magnetic field created by the HTS windings and the HTS bulks. The proposed analytical model is based on the resolution of Laplace's and Poisson's equations (by the separation-of-variables technique) for each sub-domain, i.e. stator windings, air-gap, holes between HTS bulks and the exterior iron shield. For the study, the HTS bulks are considered as perfect diamagnetic materials. The boundary and continuity conditions between the sub-domains yield the global solution. Magnetic field distributions and the electromagnetic torque obtained by the analytical method are compared with those obtained from finite element analyses.
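
    A minimal sketch of the separation-of-variables step both records above rely on (the geometry, pole-pair count and boundary amplitude are invented; the paper's model couples many harmonics across several sub-domains): a single space harmonic of Laplace's equation in the air-gap annulus R1 < r < R2, with a flux-free inner boundary, as for a perfectly diamagnetic rotor surface, and an imposed p-pole-pair potential on the outer boundary.

      import numpy as np

      R1, R2, p, A0 = 0.05, 0.06, 3, 1.0   # radii [m], pole pairs, outer amplitude (assumed)

      # A(r, theta) = (a r^p + b r^-p) cos(p theta); A(R1) = 0 and A(R2) = A0 fix a, b.
      a = A0 / (R2**p - R1**(2 * p) * R2**(-p))
      b = -a * R1**(2 * p)

      r = np.linspace(R1, R2, 5)
      A_mid = (a * r**p + b * r**(-p)) * np.cos(0.0)   # radial profile along theta = 0
      print(np.round(A_mid, 4))                        # 0 at R1, rising to A0 at R2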

  13. Analytic model of electron pulse propagation in ultrafast electron diffraction experiments

    International Nuclear Information System (INIS)

    Michalik, A.M.; Sipe, J.E.

    2006-01-01

    We present a mean-field analytic model to study the propagation of electron pulses used in ultrafast electron diffraction (UED) experiments. We assume a Gaussian form to characterize the electron pulse, and derive a system of ordinary differential equations that are solved quickly and easily to give the pulse dynamics. We compare our model to an N-body numerical simulation and show excellent agreement between the two result sets. This model is a convenient alternative to time-consuming and computationally intensive N-body simulations for exploring the dynamics of UED electron pulses, and serves as a tool for refining UED experimental designs
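
    The following is not the Gaussian mean-field ODE system of the paper, but the simplest envelope analogue of the same idea (bunch charge and size are invented): Coulomb-driven expansion of a uniform spherical bunch of N electrons, m R'' = N e^2 / (4 pi eps0 R^2), integrated as a small ODE system rather than an N-body simulation.

      import numpy as np
      from scipy.integrate import solve_ivp

      e, m, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
      N = 1.0e4      # electrons per pulse (assumed)
      R0 = 50e-6     # initial bunch radius [m] (assumed)

      def rhs(t, y):
          """Edge radius R and its velocity under the bunch's own space charge."""
          R, v = y
          return [v, N * e**2 / (4.0 * np.pi * eps0 * m * R**2)]

      sol = solve_ivp(rhs, [0.0, 1e-9], [R0, 0.0], rtol=1e-8)
      print(f"radius after 1 ns: {sol.y[0, -1] * 1e6:.1f} um")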

  14. Computer modeling of flow induced in-reactor vibrations

    International Nuclear Information System (INIS)

    Turula, P.; Mulcahy, T.M.

    1977-01-01

    An assessment of the reliability of finite element method computer models, as applied to the computation of flow induced vibration response of components used in nuclear reactors, is presented. The prototype under consideration was the Fast Flux Test Facility reactor being constructed for US-ERDA. Data were available from an extensive test program which used a scale model simulating the hydraulic and structural characteristics of the prototype components, subjected to scaled prototypic flow conditions as well as to laboratory shaker excitations. Corresponding analytical solutions of the component vibration problems were obtained using the NASTRAN computer code. Modal analyses and response analyses were performed. The effect of the surrounding fluid was accounted for. Several possible forcing function definitions were considered. Results indicate that modal computations agree well with experimental data. Response amplitude comparisons are good only under conditions favorable to a clear definition of the structural and hydraulic properties affecting the component motion. 20 refs

  15. Development of an analytical model to assess fuel property effects on combustor performance

    Science.gov (United States)

    Sutton, R. D.; Troth, D. L.; Miles, G. A.; Riddlebaugh, S. M.

    1987-01-01

    A generalized first-order computer model has been developed to analytically evaluate the potential effects of alternative fuels on gas turbine combustors. The model assesses the size, configuration, combustion reliability, and durability of the combustors required to meet performance and emission standards while operating on a broad range of fuels. Predictions predicated on combustor flow-field determinations by the model indicate that fuel chemistry, as defined by hydrogen content, exerts a significant influence on flame retardation, liner wall temperature, and smoke emission.

  16. Efficient analytical implementation of the DOT Riemann solver for the de Saint Venant-Exner morphodynamic model

    Science.gov (United States)

    Carraro, F.; Valiani, A.; Caleffi, V.

    2018-03-01

    Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
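
    A sketch of the quantity the A-DOT scheme evaluates analytically (the Grass-type transport law q_s = Ag u^3 and the quasi-linear matrix below are a common form from the literature, with illustrative state values; here the eigendecomposition is done numerically, i.e. the reference against which analytical formulas are checked):

      import numpy as np

      g, Ag = 9.81, 0.01   # gravity [m/s^2], Grass constant (illustrative)
      h, u = 2.0, 1.5      # water depth [m] and velocity [m/s] of the sample state

      # Quasi-linear matrix for U = (h, q, z) with q = h u and q_s = Ag u^3:
      A = np.array([[0.0,                    1.0,                   0.0],
                    [g * h - u**2,           2.0 * u,               g * h],
                    [-3.0 * Ag * u**3 / h,   3.0 * Ag * u**2 / h,   0.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      print("wave speeds:", np.round(np.sort(eigvals), 3))   # two fast, one slow bed wave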

  17. Cost effectiveness of ovarian reserve testing in in vitro fertilization: a Markov decision-analytic model

    NARCIS (Netherlands)

    Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.

    2011-01-01

    To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF.

  18. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over disputed, sensitive areas of interest. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization version of the theoretical interpretation models was studied, with a genetic algorithm used to optimize the model. Meanwhile, OpenMP parallel computing was introduced to greatly increase the speed of the semi-analytical optimization. One island of the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
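
    As a rough illustration of the optimization step, the sketch below inverts a toy two-flow reflectance model for depth with a minimal genetic algorithm. The forward model, its coefficients and the GA settings are hypothetical stand-ins; the paper couples the full semi-analytical model with a genetic algorithm and OpenMP parallelism:

```python
import numpy as np

rng = np.random.default_rng(0)

def rrs_model(depth, k_d=0.12, rrs_deep=0.004, rho_bottom=0.15):
    """Toy two-flow reflectance model (a simplified stand-in for the
    paper's semi-analytical model): water-column term plus bottom term."""
    a = np.exp(-2.0 * k_d * depth)
    return rrs_deep * (1.0 - a) + (rho_bottom / np.pi) * a

observed = rrs_model(7.5)                    # pretend Worldview-2 pixel value

def misfit(depth):
    return (rrs_model(depth) - observed) ** 2

pop = rng.uniform(0.0, 20.0, 50)             # candidate depths [m]
for _ in range(100):
    parents = pop[np.argsort(misfit(pop))[:10]]                    # elitism
    children = rng.choice(parents, 40) + rng.normal(0.0, 0.5, 40)  # mutation
    pop = np.concatenate([parents, np.clip(children, 0.0, 20.0)])

print("estimated depth [m]:", pop[np.argmin(misfit(pop))])
```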

  19. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    Science.gov (United States)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has long been a concern in a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation were the first, and remain the most convenient, way to model air pollutant dispersion, as the dispersion parameters and related physics are easy to handle in them. A mathematical model describing the crosswind-integrated concentration is presented. Analytical solution of the resulting advection-diffusion equation is only possible for constant or simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, the method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system can, in general, be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix is non-commutative (Martin et al., 1967). An approach based on Taylor series expansion is introduced to find the numerical solution of the first-order system. The method is applied to various profiles of wind speed and eddy diffusivity. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated against the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by assuming a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
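
    The reduction to a downwind-dependent first-order system, dc/dx = A(x)c, can be integrated in several ways. The sketch below uses a piecewise-constant matrix-exponential march, a simple alternative to the Taylor-series scheme described in the abstract; the 3x3 coefficient matrix is purely hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def march(A_of_x, c0, x_grid):
    """Integrate dc/dx = A(x) c by freezing A(x) on each cell and applying
    the matrix exponential; a piecewise-constant alternative to summing
    the Peano-Baker series when A(x) varies slowly."""
    c = np.asarray(c0, dtype=float)
    for x0, x1 in zip(x_grid[:-1], x_grid[1:]):
        c = expm(A_of_x(0.5 * (x0 + x1)) * (x1 - x0)) @ c
    return c

# hypothetical 3-mode coefficient matrix depending on downwind distance x
A = lambda x: np.array([[-0.10 * x, 0.02,  0.00],
                        [ 0.01,    -0.20,  0.03],
                        [ 0.00,     0.01, -0.05 * x]])

print(march(A, c0=[1.0, 0.0, 0.0], x_grid=np.linspace(0.0, 5.0, 200)))
```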

  20. Analytical solution using computer algebra of a biosensor for detecting toxic substances in water

    Science.gov (United States)

    Rúa Taborda, María Isabel

    2014-05-01

    In a relatively recent paper, an electrochemical biosensor for water toxicity detection based on a whole-cell bio-chip was proposed, numerically solved and analyzed. In that paper, the kinetic processes in a miniaturized electrochemical biosensor system were described using the equations for a specific enzymatic reaction together with the diffusion equation. The numerical solution showed excellent agreement with the measured data, but a numerical solution alone is not enough to design the corresponding bio-chip efficiently; an analytical solution is needed. The object of the present work is to provide such an analytical solution and thereby give algebraic guidance for designing the biosensor. The analytical solution is obtained using computer algebra software, specifically Maple. The method of solution is the Laplace transform, with the Bromwich integral and the residue theorem. The final solution is given as a series of Bessel functions, and the effective time for the biosensor is computed. The analytical solutions obtained should be very useful for predicting current variations in similar systems with different geometries, materials and biological components. Besides this, the analytical solution provided here is very useful for investigating the relationship between different chamber parameters, such as cell radius and height, and the electrode radius.
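
    The Laplace-transform machinery is easy to reproduce in a computer algebra system. The sketch below inverts a deliberately simple transfer function with Python's sympy as a stand-in for the paper's Maple computation, whose actual solution is a Bessel-function series:

```python
import sympy as sp

s, t, k = sp.symbols('s t k', positive=True)

# deliberately simple transfer function standing in for the biosensor
# response; the paper's actual solution is a series of Bessel functions
F = 1 / (s * (s + k))
c = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(c))    # expect (1 - exp(-k*t))/k (times a unit step)
```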

  1. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    Science.gov (United States)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Scientific computing projects sometimes exceed the limits of the available resource pool, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  2. Computer models for fading channels with applications to digital transmission

    Science.gov (United States)

    Loo, Chun; Secord, Norman

    1991-11-01

    The authors describe computer models for Rayleigh, Rician, log-normal, and land-mobile-satellite fading channels. All computer models for the fading channels are based on the manipulation of a white Gaussian random process. This process is approximated by a sum of sinusoids with random phase angles. These models compare very well with analytical models in terms of the probability distributions of the envelope and phase of the fading signal. For the land-mobile-satellite fading channel, results for level crossing rate and average fade duration are given. These results show that the computer models can provide a good coarse estimate of the time statistics of the faded signal. Also, for the land-mobile-satellite fading channel, the results show that a 3-pole Butterworth shaping filter should be used with the model. An example of the application of the land-mobile-satellite fading-channel model to predict the performance of a differential phase-shift keying signal is described.
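
    A minimal sum-of-sinusoids generator of the kind described can be sketched as follows; the Jakes-style angle model and parameter choices are assumptions, not the authors' exact construction:

```python
import numpy as np

def fading_gain(f_d, t, n_sin=32, seed=1):
    """Sum-of-sinusoids approximation of a complex Rayleigh fading gain:
    two quadrature processes built from sinusoids with random phases
    and Jakes-style Doppler shifts f_d*cos(theta)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_sin)     # arrival angles
    phi_i = rng.uniform(0.0, 2.0 * np.pi, n_sin)     # random phases, I branch
    phi_q = rng.uniform(0.0, 2.0 * np.pi, n_sin)     # random phases, Q branch
    w = 2.0 * np.pi * f_d * np.cos(theta)            # per-path Doppler shifts
    i = np.sqrt(2.0 / n_sin) * np.cos(np.outer(t, w) + phi_i).sum(axis=1)
    q = np.sqrt(2.0 / n_sin) * np.cos(np.outer(t, w) + phi_q).sum(axis=1)
    return i + 1j * q

t = np.arange(0.0, 1.0, 1e-3)                        # 1 kHz sampling for 1 s
envelope = np.abs(fading_gain(f_d=50.0, t=t))        # ~Rayleigh-distributed
```

    The envelope of the returned complex gain is approximately Rayleigh distributed; adding a deterministic line-of-sight component would give the Rician case.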

  3. Stability and Hopf bifurcation for a delayed SLBRS computer virus model.

    Science.gov (United States)

    Zhang, Zizhen; Yang, Huizhong

    2014-01-01

    By incorporating into the SLBRS model the time delay due to the period during which computers use antivirus software to clean the virus, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is presented to verify the analytical results.
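
    For readers who want to experiment with the delayed dynamics, a SLBRS-type system can be simulated with a plain Euler scheme and a history buffer for the delayed term. The compartment equations and rate constants below are illustrative guesses, not the paper's exact model:

```python
import numpy as np

# hypothetical parameters for a delayed SLBRS-type virus model;
# the paper's exact incidence and cleaning terms may differ
beta, gamma, eta, alpha, tau = 0.5, 0.1, 0.2, 0.05, 5.0
dt, T = 0.01, 200.0
n, d = int(T / dt), int(tau / dt)

S, L, B, R = (np.empty(n) for _ in range(4))
S[0], L[0], B[0], R[0] = 0.8, 0.1, 0.1, 0.0

for k in range(n - 1):
    B_delayed = B[max(k - d, 0)]              # B(t - tau), constant history
    dS = -beta * S[k] * (L[k] + B[k]) + alpha * R[k]
    dL = beta * S[k] * (L[k] + B[k]) - gamma * L[k]
    dB = gamma * L[k] - eta * B_delayed       # cleaning acts after delay tau
    dR = eta * B_delayed - alpha * R[k]
    S[k + 1], L[k + 1] = S[k] + dt * dS, L[k] + dt * dL
    B[k + 1], R[k + 1] = B[k] + dt * dB, R[k] + dt * dR
```

    Sweeping tau and watching for sustained oscillations in B gives a quick numerical impression of the Hopf threshold derived analytically in the paper.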

  4. Stability and Hopf Bifurcation for a Delayed SLBRS Computer Virus Model

    Directory of Open Access Journals (Sweden)

    Zizhen Zhang

    2014-01-01

    Full Text Available By incorporating into the SLBRS model the time delay due to the period during which computers use antivirus software to clean the virus, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is presented to verify the analytical results.

  5. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    Science.gov (United States)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
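
    The computational kernel, evaluating a spatial convolution as a pointwise product in the Fourier domain, can be illustrated as below. The transfer function here is a toy depth-attenuated kernel and the source a schematic fault trace; the actual code uses analytic solutions for screw dislocations in a layered viscoelastic medium:

```python
import numpy as np

n, dx = 256, 1.0e3                           # grid size, spacing [m]
k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(k, k)
K = np.hypot(KX, KY)
K[0, 0] = 1.0                                # avoid division by zero at DC

source = np.zeros((n, n))
source[n // 2, n // 2 - 10:n // 2 + 10] = 1.0    # schematic fault trace

depth = 15.0e3                               # hypothetical locking depth [m]
transfer = np.exp(-K * depth) / K            # toy attenuating kernel

# convolution theorem: O(n^2 log n) instead of O(n^4) direct summation
displacement = np.real(np.fft.ifft2(np.fft.fft2(source) * transfer))
```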

  6. Numerical and analytical solutions for problems relevant for quantum computers

    International Nuclear Information System (INIS)

    Spoerl, Andreas

    2008-01-01

    Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation, and the resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems whose dynamics are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)

  7. Two dimensional analytical model for a reconfigurable field effect transistor

    Science.gov (United States)

    Ranjith, R.; Jayachandran, Remya; Suja, K. J.; Komaragiri, Rama S.

    2018-02-01

    This paper presents two-dimensional potential and current models for a reconfigurable field effect transistor (RFET). Two potential models, which describe the subthreshold and above-threshold channel potentials, are developed by solving the two-dimensional (2D) Poisson's equation. In the first potential model, the 2D Poisson's equation is solved by considering a constant/zero charge density in the channel region of the device to get the subthreshold potential characteristics. In the second model, the accumulation charge density is considered to get the above-threshold potential characteristics of the device. The proposed models are applicable to devices having a lightly doped or intrinsic channel. In deriving the mathematical model, the whole body area is divided into two regions: the gated region and the un-gated region. The analytical models are compared with technology computer-aided design (TCAD) simulation results and are in complete agreement for different lengths of the gated regions as well as at various supply voltage levels.

  8. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer-vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
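
    A variance-based analysis of the kind described here is easy to reproduce on an analytical function. The sketch below estimates first-order Sobol' indices for the classical Ishigami test function with a pick-freeze estimator; both the function and the estimator are standard choices assumed for illustration, not necessarily those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

def ishigami(x):                     # classical sensitivity-analysis test function
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

A = rng.uniform(-np.pi, np.pi, (N, 3))
B = rng.uniform(-np.pi, np.pi, (N, 3))
yA, yB = ishigami(A), ishigami(B)

for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "freeze" input i at the B sample
    Si = np.mean(yB * (ishigami(ABi) - yA)) / yA.var()   # pick-freeze estimator
    print(f"S{i + 1} ~ {Si:.3f}")    # roughly 0.31, 0.44, 0.00
```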

  10. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  11. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of the fluid flow parameters (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in these parameters is important for comprehending and checking the generated data, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.

  12. A Novel Computer Virus Propagation Model under Security Classification

    Directory of Open Access Journals (Sweden)

    Qingyi Zhu

    2017-01-01

    Full Text Available In reality, some computers have specific security classifications. For the sake of safety and cost, the security level of computers is upgraded as threats in the network increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to raise the security of a fraction of computers with a low security level. In some specific realistic environments, the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model that considers the impact of security classification in a fully interconnected network. Using the theory of dynamic stability, the existence of equilibria and the stability conditions are analysed and proved, and the optimal threshold value mentioned above is given analytically. Then, some numerical experiments are made to justify the model. In addition, some discussions and antivirus measures are given.

  13. User Behavior Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Turcotte, Melissa [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Moore, Juston Shane [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-28

    User Behaviour Analytics is the tracking, collecting and assessing of user data and activities. The goal is to detect misuse of user credentials by developing models of the normal behaviour of user credentials within a computer network and detecting outliers with respect to their baseline.

  14. Analytical model for screening potential CO2 repositories

    Science.gov (United States)

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  15. Study on dynamic characteristics of reduced analytical model for PWR reactor internal structures

    International Nuclear Information System (INIS)

    Yoo, Bong; Lee, Jae Han; Kim, Jong Bum; Koo, Kyeong Hoe

    1993-01-01

    The objective of this study is to establish a procedure for reduced analytical modeling of PWR reactor internal (RI) structures and to carry out a sensitivity study of the dynamic characteristics of the structures by varying structural parameters such as stiffness, mass and damping. Modeling techniques for PWR reactor internal structures and the computer programs used for the dynamic analysis of these structures are briefly reviewed. Among the many components of the RI structures, the dynamic characteristics of the core support barrel (CSB) were analysed. A sensitivity analysis of the dynamic characteristics of the reduced analytical model, considering variations of the stiffnesses of the lower and upper flanges of the CSB and of the RV snubber, was performed to improve the dynamic characteristics of the RI structures against the given external loadings. In order to enhance the structural design margin of the RI components, nonlinear time history analyses were performed for the reduced RI models to compare the structural responses of the reference model and the modified one. (Author)

  16. On Designing a Generic Framework for Cloud-based Big Data Analytics

    OpenAIRE

    Khan, Samiya; Alam, Mansaf

    2017-01-01

    Big data analytics has gathered immense research attention lately because of its ability to harness useful information from heaps of data. Cloud computing has been adjudged one of the best infrastructural solutions for implementing big data analytics. This research paper proposes a five-layer model for cloud-based big data analytics that uses dew computing and edge computing concepts. Besides this, the paper also presents an approach for creating a custom big data stack by selecting ...

  17. Learning Analytics: The next frontier for computer assisted language learning in big data age

    Directory of Open Access Journals (Sweden)

    Yu Qinglan

    2015-01-01

    Full Text Available Learning analytics (LA) has been applied to various learning environments, though it is quite new in the field of computer-assisted language learning (CALL). This article examines the application of learning analytics in the upcoming big data age. It starts with an introduction to, and applications of, learning analytics in other fields, followed by a retrospective review of the historical interaction between learning and media in CALL, and an analysis of why one would turn to learning analytics to increase the efficiency of foreign language education. As established in previous research, new technology, including big data mining and analysis, will inevitably enhance the learning of foreign languages. Potential changes that learning analytics may bring to Chinese foreign language education and research are also presented in the article.

  18. Analytical and grid-free solutions to the Lighthill-Whitham-Richards traffic flow model

    KAUST Repository

    Mazaré, Pierre Emmanuel; Dehwah, Ahmad H.; Claudel, Christian G.; Bayen, Alexandre M.

    2011-01-01

    In this article, we propose a computational method for solving the Lighthill-Whitham-Richards (LWR) partial differential equation (PDE) semi-analytically for arbitrary piecewise-constant initial and boundary conditions, and for arbitrary concave fundamental diagrams. With these assumptions, we show that the solution to the LWR PDE at any location and time can be computed exactly and semi-analytically for a very low computational cost using the cumulative number of vehicles formulation of the problem. We implement the proposed computational method on a representative traffic flow scenario to illustrate the exactness of the analytical solution. We also show that the proposed scheme can handle more complex scenarios including traffic lights or moving bottlenecks. The computational cost of the method is very favorable, and is compared with existing algorithms. A toolbox implementation available for public download is briefly described, and posted at http://traffic.berkeley.edu/project/downloads/lwrsolver. © 2011 Elsevier Ltd.

  19. Analytical and grid-free solutions to the Lighthill-Whitham-Richards traffic flow model

    KAUST Repository

    Mazaré, Pierre Emmanuel

    2011-12-01

    In this article, we propose a computational method for solving the Lighthill-Whitham-Richards (LWR) partial differential equation (PDE) semi-analytically for arbitrary piecewise-constant initial and boundary conditions, and for arbitrary concave fundamental diagrams. With these assumptions, we show that the solution to the LWR PDE at any location and time can be computed exactly and semi-analytically for a very low computational cost using the cumulative number of vehicles formulation of the problem. We implement the proposed computational method on a representative traffic flow scenario to illustrate the exactness of the analytical solution. We also show that the proposed scheme can handle more complex scenarios including traffic lights or moving bottlenecks. The computational cost of the method is very favorable, and is compared with existing algorithms. A toolbox implementation available for public download is briefly described, and posted at http://traffic.berkeley.edu/project/downloads/lwrsolver. © 2011 Elsevier Ltd.
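
    For readers wanting a reference solution to compare the semi-analytic method against, a standard grid-based Godunov scheme for the LWR equation with a Greenshields fundamental diagram fits in a few lines (all parameters below are illustrative; the toolbox above computes solutions grid-free from the cumulative vehicle count):

```python
import numpy as np

v_f, rho_max = 30.0, 0.2                  # free-flow speed [m/s], jam density [veh/m]
flux = lambda r: v_f * r * (1.0 - r / rho_max)   # Greenshields fundamental diagram
rho_c = rho_max / 2.0                     # critical (capacity) density

def godunov_flux(rl, rr):
    """Exact interface flux for a concave fundamental diagram."""
    if rl <= rr:
        return min(flux(rl), flux(rr))    # shock-type configuration
    return flux(np.clip(rho_c, rr, rl))   # rarefaction / capacity flow

dx, dt, nx, nt = 10.0, 0.25, 400, 600     # CFL: v_f * dt / dx = 0.75 < 1
rho = np.where(np.arange(nx) < nx // 2, 0.18, 0.02)   # Riemann initial data

for _ in range(nt):
    F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(nx - 1)])
    rho[1:-1] -= dt / dx * (F[1:] - F[:-1])
```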

  20. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    International Nuclear Information System (INIS)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib

    2010-01-01

    Choosing a desired policy for divestiture of dominant firms' generation assets has been a challenging task and an open question for regulatory authorities. To deal with this problem, in this paper an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of the generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to the operating constraints of the electricity market. It is proved here that the market concentration corresponding to an operating condition always lies within an interval calculated by the analytical method. For situations where the cost functions of the generators are available, ACE is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. A power producer using the Q-learning method learns from past experience to implicitly identify market power and to find the desired response in competing with its rivals. Both methods are applied to a multi-area power system, and the effects of different divestiture policies on market behavior are analyzed.
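
    The producers' learning loop can be illustrated in a stateless (bandit-style) reduction of Q-learning; the profit function below is a hypothetical stand-in for the paper's market-clearing simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_actions = 10                       # discretised markup levels on marginal cost
Q = np.zeros(n_actions)
lr, eps = 0.1, 0.1                   # learning rate, exploration probability

def market_profit(action):
    """Hypothetical market clearing: profit peaks at a moderate markup
    and falls when the producer prices itself out of dispatch."""
    m = action / (n_actions - 1)
    return m * max(0.0, 1.0 - 1.5 * m) + rng.normal(0.0, 0.01)

for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q))
    Q[a] += lr * (market_profit(a) - Q[a])   # stateless (bandit) Q-update

print("learned markup fraction:", np.argmax(Q) / (n_actions - 1))
```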

  1. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    Energy Technology Data Exchange (ETDEWEB)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib [Department of Electrical Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad (Iran)

    2010-03-15

    Choosing a desired policy for divestiture of dominant firms' generation assets has been a challenging task and an open question for regulatory authorities. To deal with this problem, in this paper an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of the generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to the operating constraints of the electricity market. It is proved here that the market concentration corresponding to an operating condition always lies within an interval calculated by the analytical method. For situations where the cost functions of the generators are available, ACE is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. A power producer using the Q-learning method learns from past experience to implicitly identify market power and to find the desired response in competing with its rivals. Both methods are applied to a multi-area power system, and the effects of different divestiture policies on market behavior are analyzed. (author)

  2. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
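
    The first step, extracting candidate neighbour pairs from a Delaunay triangulation of atom centres, can be sketched with scipy. This uses the ordinary (unweighted) triangulation plus a distance cutoff as a crude stand-in for the weighted Delaunay and dual-complex filtering performed by BallContact:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
centers = rng.uniform(0.0, 30.0, (200, 3))   # hypothetical atom centres [Angstrom]

tri = Delaunay(centers)                      # ordinary Delaunay triangulation
indptr, indices = tri.vertex_neighbor_vertices

# keep Delaunay edges below a heuristic cutoff as candidate contacts
cutoff = 4.5
contacts = {(i, j)
            for i in range(len(centers))
            for j in indices[indptr[i]:indptr[i + 1]]
            if i < j and np.linalg.norm(centers[i] - centers[j]) < cutoff}
print(len(contacts), "candidate atomic contacts")
```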

  3. Some questions of using coding theory and analytical calculation methods on computers

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1987-01-01

    Main results of investigations devoted to the application of the theory and practice of error-correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory and analytical computer calculations, essentially new combinational devices, for example parallel encoders, have been developed. Questions concerning the creation of a new algorithm for the calculation of digital functions by computers, and problems of devising universal, dynamically reprogrammable logic modules, are discussed.

  4. A Study of Analytical Solution for the Special Dissolution Rate Model of Rock Salt

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2017-01-01

    Full Text Available By calculating the concentration distributions of rock salt solutions at the boundary layer, an ordinary differential equation describing a special dissolution rate model of rock salt under the assumption of an instantaneous diffusion process was established to investigate the dissolution mechanism of rock salt under transient but stable conditions. The ordinary differential equation was then solved mathematically to give an analytical solution and related expressions for the dissolved radius and solution concentration. Thereafter, the analytical solution was fitted to transient dissolution test data of rock salt to provide the dissolution parameters at different flow rates, and the physical meaning of the analytical formula was also discussed. Finally, the factors influencing the analytical formula were investigated. There was an approximately linear relationship between the dissolution parameters and the flow rate. The effects of the dissolution area and the initial volume of the solution on the dissolution rate equation of rock salt were computationally investigated. The results showed that the present analytical solution gives a good description of the dissolution mechanism of rock salt under some special conditions, which may provide a primary theoretical basis and an analytical way to investigate the dissolution characteristics of rock salt.

  5. Analytical model for macromolecular partitioning during yeast cell division

    International Nuclear Information System (INIS)

    Kinkhabwala, Ali; Khmelinskii, Anton; Knop, Michael

    2014-01-01

    Asymmetric cell division, whereby a parent cell generates two sibling cells with unequal content and thereby distinct fates, is central to cell differentiation, organism development and ageing. Unequal partitioning of the macromolecular content of the parent cell — which includes proteins, DNA, RNA, large proteinaceous assemblies and organelles — can be achieved by both passive (e.g. diffusion, localized retention sites) and active (e.g. motor-driven transport) processes operating in the presence of external polarity cues, internal asymmetries, spontaneous symmetry breaking, or stochastic effects. However, the quantitative contribution of different processes to the partitioning of macromolecular content is difficult to evaluate. Here we developed an analytical model that allows rapid quantitative assessment of partitioning as a function of various parameters in the budding yeast Saccharomyces cerevisiae. This model exposes quantitative degeneracies among the physical parameters that govern macromolecular partitioning, and reveals regions of the solution space where diffusion is sufficient to drive asymmetric partitioning and regions where asymmetric partitioning can only be achieved through additional processes such as motor-driven transport. Application of the model to different macromolecular assemblies suggests that partitioning of protein aggregates and episomes, but not prions, is diffusion-limited in yeast, consistent with previous reports. In contrast to computationally intensive stochastic simulations of particular scenarios, our analytical model provides an efficient and comprehensive overview of partitioning as a function of global and macromolecule-specific parameters. Identification of quantitative degeneracies among these parameters highlights the importance of their careful measurement for a given macromolecular species in order to understand the dominant processes responsible for its observed partitioning

  6. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2017-02-16

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
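
    The optimization layer is generic. A minimal particle swarm over the three design variables named above is sketched below; the objective is a smooth hypothetical surrogate, whereas the paper evaluates each candidate with the magnetic equivalent circuit:

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    """Hypothetical smooth surrogate for the MEC evaluation of a TFM design;
    x = (stator pole length, magnet length, rotor thickness) in mm."""
    return np.sum((x - np.array([12.0, 8.0, 5.0])) ** 2)

n_p, dim = 30, 3
lo = np.array([5.0, 2.0, 2.0])              # geometric lower bounds [mm]
hi = np.array([25.0, 15.0, 10.0])           # geometric upper bounds [mm]
x = rng.uniform(lo, hi, (n_p, dim))
v = np.zeros_like(x)
pbest = x.copy()
pval = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pval)].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_p, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)               # enforce design constraints
    val = np.array([objective(p) for p in x])
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pval)].copy()

print("optimised geometry [mm]:", gbest)
```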

  7. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

    Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinder the penetration of PV implementations and complicate system design. An optimal sizing of these systems proves to be essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with regard to the reliability and cost indicators, which are the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides the necessity of long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. The calculation of the optimal tilt angle for yearly, seasonal and monthly frequencies results in higher system efficiency. It is therefore suggested to change the tilt angle regularly in order to maximize the solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey

  8. Modeling and analytical simulation of a smouldering carbonaceous ...

    African Journals Online (AJOL)

    Modeling and analytical simulation of a smouldering carbonaceous rod. A.A. Mohammed, R.O. Olayiwola, M Eseyin, A.A. Wachin. Abstract. Modeling of pyrolysis and combustion in a smouldering fuel bed requires the solution of flow, heat and mass transfer through porous media. This paper presents an analytical method ...

  9. Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    Science.gov (United States)

    Lane, Peter C. R.; Gobet, Fernand

    2013-03-01

    Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the 'speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.

  10. Improved steamflood analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Chandra, S.; Mamora, D.D. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas A and M Univ., TX (United States)

    2005-11-01

    Predicting the performance of steam flooding can help in the proper execution of enhanced oil recovery (EOR) processes. The Jones model is often used for analytical steam flooding performance prediction, but it does not accurately predict oil production peaks. In this study, an improved steam flood model was developed by modifying 2 of the 3 components of the capture factor in the Jones model. The modifications were based on simulation results from a Society of Petroleum Engineers (SPE) comparative project case model. The production performance of a 5-spot steamflood pattern unit was simulated and compared with results obtained from the Jones model. Three reservoir types were simulated through the use of 3-D Cartesian black oil models. In order to correlate the simulation and the Jones analytical model results for the start and height of the production peak, the dimensionless steam zone size was modified to account for a decrease in oil viscosity during steam flooding and its dependence on the steam injection rate. In addition, the dimensionless volume of displaced oil produced was modified from its square-root format to an exponential form. The modified model improved results for production performance by up to 20 years of simulated steam flooding, compared to the Jones model. Results agreed with simulation results for 13 different cases, including 3 different sets of reservoir and fluid properties. Reservoir engineers will benefit from the improved accuracy of the model. Oil displacement calculations were based on methods proposed in earlier research, in which the oil displacement rate is a function of cumulative oil steam ratio. The cumulative oil steam ratio is a function of overall thermal efficiency. Capture factor component formulae were presented, as well as charts of oil production rates and cumulative oil-steam ratios for various reservoirs. 13 refs., 4 tabs., 29 figs.

  11. Computer simulation of the martensite transformation in a model two-dimensional body

    International Nuclear Information System (INIS)

    Chen, S.; Khachaturyan, A.G.; Morris, J.W. Jr.

    1979-05-01

    An analytical model of a martensitic transformation in an idealized body is constructed and used to carry out a computer simulation of the transformation in a pseudo-two-dimensional crystal. The reaction is assumed to proceed through the sequential transformation of elementary volumes (elementary martensitic particles, EMP) via the Bain strain. The elastic interaction between these volumes is computed and the transformation path chosen so as to minimize the total free energy. The model transformation shows interesting qualitative correspondencies with the known features of martensitic transformations in typical solids

  12. Computer simulation of the martensite transformation in a model two-dimensional body

    International Nuclear Information System (INIS)

    Chen, S.; Khachaturyan, A.G.; Morris, J.W. Jr.

    1979-06-01

    An analytical model of a martensitic transformation in an idealized body is constructed and used to carry out a computer simulation of the transformation in a pseudo-two-dimensional crystal. The reaction is assumed to proceed through the sequential transformation of elementary volumes (elementary martensitic particles, EMP) via the Bain strain. The elastic interaction between these volumes is computed and the transformation path chosen so as to minimize the total free energy. The model transformation shows interesting qualitative correspondencies with the known features of martensitic transformations in typical solids

  13. Polarimetric and angular light-scattering from dense media: Comparison of a vectorial radiative transfer model with analytical, stochastic and experimental approaches

    International Nuclear Information System (INIS)

    Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent

    2013-01-01

    Our work presents computations via a vectorial radiative transfer model of the polarimetric and angular light scattered by a stratified dense medium with small and intermediate optical thickness. We report the validation of this model using analytical results and different computational methods like stochastic algorithms. Moreover, we check the model with experimental data from a specific scatterometer developed at the Onera. The advantages and disadvantages of a radiative approach are discussed. This paper represents a step toward the characterization of particles in dense media involving multiple scattering. -- Highlights: • A vectorial radiative transfer model to simulate the light scattered by stratified layers is developed. • The vectorial radiative transfer equation is solved using an adding–doubling technique. • The results are compared to analytical and stochastic data. • Validation with experimental data from a scatterometer developed at Onera is presented

  14. Simplified analytical model to simulate radionuclide release from radioactive waste trenches

    International Nuclear Information System (INIS)

    Sa, Bernardete Lemes Vieira de

    2001-01-01

    In order to evaluate postclosure off-site doses from low-level radioactive waste disposal facilities, a computer code was developed to simulate radionuclide release from the waste form, transport through the vadose zone and transport in the saturated zone. This paper describes the methodology used to model these processes. The release of radionuclides from the waste is calculated using a model based on first-order kinetics, and the transport through porous media is determined using a semi-analytical solution of the mass transport equation, considering the limiting case of unidirectional convective transport with three-dimensional dispersion in an isotropic medium. The results obtained in this work were compared with other codes, showing good agreement. (author)
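
    The waste-form release step can be written down directly from first-order kinetics. The nuclide, rate constants and inventory below are hypothetical values for illustration; the downstream transport would use the semi-analytical transport solution described above:

```python
import numpy as np

lam_leach = 1.0e-3                   # first-order leach rate [1/yr], hypothetical
lam_decay = np.log(2.0) / 29.1       # decay constant, e.g. Sr-90 [1/yr]
A0 = 1.0e12                          # initial trench inventory [Bq], hypothetical

t = np.linspace(0.0, 300.0, 601)     # years after closure
A_waste = A0 * np.exp(-(lam_leach + lam_decay) * t)   # activity left in waste form
release_rate = lam_leach * A_waste                    # source term to vadose zone [Bq/yr]
```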

  15. Computer modelling of eddy current probes

    International Nuclear Information System (INIS)

    Sullivan, S.P.

    1992-01-01

    Computer programs have been developed for modelling impedance and transmit-receive eddy current probes in two-dimensional axis-symmetric configurations. These programs, which are based on analytic equations, simulate bobbin probes in infinitely long tubes and surface probes on plates. They calculate probe signal due to uniform variations in conductor thickness, resistivity and permeability. These signals depend on probe design and frequency. A finite element numerical program has been procured to calculate magnetic permeability in non-linear ferromagnetic materials. Permeability values from these calculations can be incorporated into the above analytic programs to predict signals from eddy current probes with permanent magnets in ferromagnetic tubes. These programs were used to test various probe designs for new testing applications. Measurements of magnetic permeability in magnetically biased ferromagnetic materials have been performed by superimposing experimental signals, from special laboratory ET probes, on impedance plane diagrams calculated using these programs. (author). 3 refs., 2 figs

  16. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is obtained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
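
    The line-source building block is classical: a continuous line source releasing q' watts per metre into an infinite medium raises the temperature by dT = q'/(4*pi*k) * E1(r^2/(4*alpha*t)), where E1 is the exponential integral. A small sketch follows (material constants are hypothetical, and the image-source or complementary-field correction for the free surface is omitted):

```python
import numpy as np
from scipy.special import exp1

k_th, alpha = 20.0, 5.0e-6     # conductivity [W/(m K)], diffusivity [m^2/s]
q_line = 200.0                 # source strength per unit length [W/m]

def delta_T(r, t):
    """Temperature rise of a continuous line heat source in an infinite
    medium; a free surface can be handled with an image source."""
    return q_line / (4.0 * np.pi * k_th) * exp1(r ** 2 / (4.0 * alpha * t))

r = np.logspace(-5, -2, 5)     # 10 um to 10 mm from the scan vector
print(delta_T(r, t=1.0e-3))    # steep near-spot gradients after 1 ms
```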

  17. Computational model of gamma irradiation room at ININ

    Science.gov (United States)

    Rodríguez-Romo, Suemi; Patlan-Cardoso, Fernando; Ibáñez-Orozco, Oscar; Vergara Martínez, Francisco Javier

    2018-03-01

    In this paper, we present a model of the gamma irradiation room at the National Institute of Nuclear Research (ININ is its acronym in Spanish) in Mexico, to improve the use of physics in dosimetry for human protection. We deal with air-filled ionization chambers and in-house scientific computing, framed both in the GEANT4 scheme and in our analytical approach to characterizing the irradiation room. This room is the only secondary dosimetry facility in Mexico. Our aim is to optimize its experimental designs, facilities, and industrial applications of physical radiation. The computational results provided by our model are supported by all the known experimental data on the performance of the ININ gamma irradiation room, and allow us to predict the values of the main variables related to this fully enclosed space to within an acceptable margin of error.

  18. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

  19. Analytical models for low-power rectenna design

    NARCIS (Netherlands)

    Akkermans, J.A.G.; Beurden, van M.C.; Doodeman, G.J.N.; Visser, H.J.

    2005-01-01

    The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed.

  20. Analytically solvable models of reaction-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Zemskov, E P; Kassner, K [Institut fuer Theoretische Physik, Otto-von-Guericke-Universitaet, Universitaetsplatz 2, 39106 Magdeburg (Germany)

    2004-05-01

    We consider a class of analytically solvable models of reaction-diffusion systems. An analytical treatment is possible because the nonlinear reaction term is approximated by a piecewise linear function. As particular examples we choose front and pulse solutions to illustrate the matching procedure in the one-dimensional case.

  1. Statistically qualified neuro-analytic failure detection method and system

    Science.gov (United States)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaption and stochastic model modification of the deterministic model adaptation. Deterministic model adaption involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  2. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this requirement can be fulfilled by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of the reliability characteristics of waste management facilities containing intermediate storage. The application of the model is demonstrated by an example. (orig.) [de

  3. Plastic deformation of crystals: analytical and computer simulation studies of dislocation glide

    International Nuclear Information System (INIS)

    Altintas, S.

    1978-05-01

    The plastic deformation of crystals is usually accomplished through the motion of dislocations. The glide of a dislocation is impelled by the applied stress and opposed by microstructural defects such as point defects, voids, precipitates and other dislocations. The planar glide of a dislocation through randomly distributed obstacles is considered. The objective of the present research work is to calculate the critical resolved shear stress (CRSS) for athermal glide and the velocity of the dislocation at finite temperature as a function of the applied stress and the nature and strength of the obstacles. Dislocation glide through mixtures of obstacles has been studied analytically and by computer simulation. Arrays containing two kinds of obstacles as well as square distributions of obstacle strengths are considered. The critical resolved shear stress for an array containing obstacles with a given distribution of strengths is calculated using the sum of the quadratic mean of the stresses for the individual obstacles and is found to be in good agreement with the computer simulation data. Computer simulations of dislocation glide through randomly distributed obstacles containing up to 10^6 obstacles show that the CRSS decreases as the size of the array increases and approaches a limiting value. Histograms of forces and of segment lengths are obtained and compared with theoretical predictions. Effects of array shape and boundary conditions on the dislocation glide are also studied. Analytical and computer simulation results are compared with experimental results obtained on precipitation-, irradiation-, forest-, and impurity cluster-hardening systems and are found to be in good agreement.

  4. Plastic deformation of crystals: analytical and computer simulation studies of dislocation glide

    Energy Technology Data Exchange (ETDEWEB)

    Altintas, S.

    1978-05-01

    The plastic deformation of crystals is usually accomplished through the motion of dislocations. The glide of a dislocation is impelled by the applied stress and opposed by microstructural defects such as point defects, voids, precipitates and other dislocations. The planar glide of a dislocation through randomly distributed obstacles is considered. The objective of the present research work is to calculate the critical resolved shear stress (CRSS) for athermal glide and the velocity of the dislocation at finite temperature as a function of the applied stress and the nature and strength of the obstacles. Dislocation glide through mixtures of obstacles has been studied analytically and by computer simulation. Arrays containing two kinds of obstacles as well as square distributions of obstacle strengths are considered. The critical resolved shear stress for an array containing obstacles with a given distribution of strengths is calculated using the sum of the quadratic mean of the stresses for the individual obstacles and is found to be in good agreement with the computer simulation data. Computer simulations of dislocation glide through randomly distributed obstacles containing up to 10^6 obstacles show that the CRSS decreases as the size of the array increases and approaches a limiting value. Histograms of forces and of segment lengths are obtained and compared with theoretical predictions. Effects of array shape and boundary conditions on the dislocation glide are also studied. Analytical and computer simulation results are compared with experimental results obtained on precipitation-, irradiation-, forest-, and impurity cluster-hardening systems and are found to be in good agreement.
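
    The superposition rule described in the abstract can be sketched in dimensionless form. The Friedel-type relation tau_i ≈ beta_i^(3/2) for the reduced CRSS of a single obstacle species of reduced strength beta_i is a standard weak-obstacle approximation assumed here for illustration; combining species by a fraction-weighted quadratic mean follows the abstract. The numerical values are invented.

        import numpy as np

        def crss_single(beta):
            """Friedel estimate of the reduced CRSS for one obstacle species
            of dimensionless strength beta (weak-obstacle limit, beta << 1)."""
            return beta ** 1.5

        def crss_mixture(betas, fractions):
            """Quadratic-mean superposition over a distribution of strengths."""
            taus = crss_single(np.asarray(betas, dtype=float))
            f = np.asarray(fractions, dtype=float)
            f = f / f.sum()
            return np.sqrt(np.sum(f * taus ** 2))

        # Two-species array: 70% weak obstacles (beta = 0.1), 30% stronger (beta = 0.4).
        print(f"reduced CRSS = {crss_mixture([0.1, 0.4], [0.7, 0.3]):.4f}")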

  5. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    International Nuclear Information System (INIS)

    Massoud, M.

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as system response to imposed transients under symmetric and asymmetric operations. System characteristics were used to obtain correlations for the heat transfer coefficient in heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed development of a mathematical model for single-phase, steady-state and transient natural circulation as well as modification of an existing model for two-phase flow analysis of phenomena such as small break LOCA, high pressure coolant injection and pump coastdown. The developed mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled 'Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS),' was employed to simulate the imposed transients. It closely emulated the system behavior throughout the transients and the subsequent steady states. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs are faster than real time. Off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of natural circulation behavior to variation of inputs such as secondary distribution and power transients.

  6. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    Science.gov (United States)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage. Analytical solutions of transient storage models in the literature do not cover spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms on the concentrations in the stream and in the storage zones by using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature. Good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by the variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.

  7. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires an order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  8. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires an order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
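
    A minimal PSO loop of the kind described, applied to a stand-in log-likelihood (here a quadratic bowl in three parameters). The SAG model, its observables and parameter ranges are not public inputs to this sketch, so the objective function and all hyperparameters below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def neg_log_like(theta):
            # Stand-in for -ln L(model(theta) | observables): a quadratic bowl.
            return np.sum((theta - np.array([0.3, -1.2, 2.0])) ** 2, axis=-1)

        n_particles, n_dim, n_iter = 30, 3, 200
        w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights

        x = rng.uniform(-5, 5, (n_particles, n_dim))  # positions in parameter space
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), neg_log_like(x)
        gbest = pbest[np.argmin(pbest_f)]

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, n_dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = neg_log_like(x)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)]

        print("best-fitting parameters:", gbest)

    Each objective evaluation here stands in for a full SAM run over the merger trees, which is why reducing the number of evaluations matters.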

  9. Kernel method for air quality modelling. II. Comparison with analytic solutions

    Energy Technology Data Exchange (ETDEWEB)

    Lorimer, G S; Ross, D G

    1986-01-01

    The performance of Lorimer's (1986) kernel method for solving the advection-diffusion equation is tested for instantaneous and continuous emissions into a variety of model atmospheres. Analytical solutions are available for comparison in each case. The results indicate that a modest minicomputer is quite adequate for obtaining satisfactory precision even for the most trying test performed here, which involves a diffusivity tensor and wind speed which are nonlinear functions of the height above ground. Simulations of the same cases by the particle-in-cell technique are found to provide substantially lower accuracy even when use is made of greater computer resources.

  10. Computer-generated movies as an analytic tool

    International Nuclear Information System (INIS)

    Elliott, R.L.

    1978-01-01

    One of the problems faced by the users of large, sophisticated modeling programs at the Los Alamos Scientific Laboratory (LASL) is the analysis of the results of their calculations. One of the more productive and frequently spectacular methods is the production of computer-generated movies. An overview of the generation of computer movies at LASL is presented. The hardware, software, and generation techniques are briefly discussed

  11. 33 CFR 385.33 - Revisions to models and analytical tools.

    Science.gov (United States)

    2010-07-01

    ... on a case-by-case basis what documentation is appropriate for revisions to models and analytic tools... analytical tools. 385.33 Section 385.33 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE... Incorporating New Information Into the Plan § 385.33 Revisions to models and analytical tools. (a) In carrying...

  12. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  13. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    Science.gov (United States)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and metaprogramming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
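
    The decoupled client-server idea can be sketched as a proxy that assembles the text of an arcpy call on the client and ships it to a remote interpreter. The transport, server endpoint and class name below are hypothetical illustrations; arc4nix's actual API and wire protocol are not reproduced here.

        import json
        import urllib.request

        SERVER = "http://gis-worker.example.org:8000/run"   # hypothetical endpoint

        class RemoteArcpy:
            """Builds 'arcpy.<tool>(...)' source text and executes it remotely."""

            def __getattr__(self, tool):
                def call(*args, **kwargs):
                    parts = [repr(a) for a in args]
                    parts += [f"{k}={v!r}" for k, v in kwargs.items()]
                    code = f"result = arcpy.{tool}({', '.join(parts)})"
                    payload = json.dumps({"code": code}).encode()
                    req = urllib.request.Request(
                        SERVER, data=payload,
                        headers={"Content-Type": "application/json"})
                    with urllib.request.urlopen(req) as resp:
                        return json.load(resp)["result"]
                return call

        arcpy = RemoteArcpy()
        # The tool runs on the server; only call text and result cross the network:
        # arcpy.Clip_analysis("storms.shp", "florida.shp", "storms_fl.shp")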

  14. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest. For a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  15. ADGEN: ADjoint GENerator for computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest. For a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
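
    The payoff of the adjoint method, all partial derivatives of one response in a single backward sweep instead of one perturbed run per parameter, can be sketched with a tiny reverse-mode differentiation class. This is a generic Python illustration of the technique, not ADGEN's FORTRAN implementation.

        class Var:
            """Scalar node recording local partials for a backward (adjoint) sweep."""

            def __init__(self, value, parents=()):
                self.value, self.parents, self.adjoint = value, parents, 0.0

            def __add__(self, other):
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

            def __mul__(self, other):
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

            def backward(self):
                # Naive stack traversal; fine for this tree-shaped expression.
                # A full implementation would process nodes in topological order.
                self.adjoint = 1.0
                stack = [self]
                while stack:
                    node = stack.pop()
                    for parent, local in node.parents:
                        parent.adjoint += local * node.adjoint
                        stack.append(parent)

        # Response r = p1*p2 + p2*p3: one backward pass yields dr/dp_i for all i.
        p1, p2, p3 = Var(2.0), Var(3.0), Var(4.0)
        r = p1 * p2 + p2 * p3
        r.backward()
        print([p.adjoint for p in (p1, p2, p3)])  # [3.0, 6.0, 3.0]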

  16. Undergraduate students’ challenges with computational modelling in physics

    Directory of Open Access Journals (Sweden)

    Simen A. Sørby

    2012-12-01

    Full Text Available In recent years, computational perspectives have become essential parts of several of the University of Oslo's natural science programmes. In this paper we discuss some main findings from a qualitative study of the computational perspectives' impact on the students' work with their first course in physics, mechanics, and their learning and meaning making of its contents. Discussions of the students' learning of physics are based on sociocultural theory, which originates in Vygotsky and Bakhtin, and subsequent physics education research. Results imply that the greatest challenge for students when working with computational assignments is to combine knowledge from previously known, but separate contexts. Integrating knowledge of informatics, numerical and analytical mathematics and conceptual understanding of physics appears as a clear challenge for the students. We also observe a lack of awareness concerning the limitations of physical modelling. The students need help with identifying the appropriate knowledge system or 'tool set' for the different tasks at hand; they need help to create a plan for their modelling and to become aware of its limits. In light of this, we propose that an instructive and dialogic text as the basis for the exercises, in which the emphasis is on specification, clarification and elaboration, would be of potentially great aid for students who are new to computational modelling.

  17. An Improved Mathematical Model for Computing Power Output of Solar Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Abdul Qayoom Jakhrani

    2014-01-01

    Full Text Available It is difficult to determine the input parameter values for equivalent circuit models of photovoltaic modules through analytical methods. Thus, previous researchers preferred to use numerical methods. Since the numerical methods are time consuming and need long-term time series data, which are not available in most developing countries, an improved mathematical model was formulated by a combination of analytical and numerical methods to overcome the limitations of existing methods. The values of the required model input parameters were computed analytically. The expression for the output current of the photovoltaic module was determined explicitly by the Lambert W function, and the voltage was determined numerically by the Newton-Raphson method. Moreover, algebraic equations were derived for the shape factor, which involves the ideality factor and the series resistance, of a single diode photovoltaic module power output model. The formulated model results were validated against the rated power output of a photovoltaic module provided by manufacturers, using local meteorological data, which gave an error of ±2%. It was found that the proposed model is more practical in terms of precise estimation of photovoltaic module power output for any required location and in the number of variables used.
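
    The explicit Lambert W expression for the output current of the single-diode module model can be sketched as below. The five parameter values are illustrative placeholders, not the paper's analytically derived ones; the formula itself is the standard closed-form solution of the single-diode equation.

        import numpy as np
        from scipy.special import lambertw

        def module_current(V, Iph=5.0, I0=1e-9, Rs=0.3, Rsh=300.0,
                           n=1.3, Ns=60, T=298.15):
            """Explicit I(V) of the single-diode model via the Lambert W function.

            Solves I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
            without iteration, where a = n*Ns*k*T/q is the modified ideality factor.
            """
            a = n * Ns * 1.380649e-23 * T / 1.602176634e-19
            arg = (Rs * I0 * Rsh / (a * (Rs + Rsh))
                   * np.exp(Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh))))
            return ((Rsh * (Iph + I0) - V) / (Rs + Rsh)
                    - (a / Rs) * lambertw(arg).real)

        V = np.linspace(0.0, 38.0, 5)
        P = V * module_current(V)
        print(np.c_[V, P])   # power output along the I-V curve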

  18. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  19. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to study further the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which can be used in a dynamical analysis for preliminary design. To aim at more realistic values, such models have to take into account the heat losses and irreversibilities in the engine. An analytical model which encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  20. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    Science.gov (United States)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is however violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that larger (smaller) magnitude of effective

  1. Simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation anisotropy

    International Nuclear Information System (INIS)

    Wang, Y.

    1996-01-01

    We present two simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation (CBR) anisotropy in inflationary models; one method uses a time-dependent transfer function, the other uses an approximate gravity-wave mode function which is a simple combination of the lowest order spherical Bessel functions. We compare the CBR anisotropy tensor multipole spectrum computed using our methods with the previous result of the highly accurate numerical method, the 'Boltzmann' method. Our time-dependent transfer function is more accurate than the time-independent transfer function found by Turner, White, and Lidsey; however, we find that the transfer function method is only good for l ≲ 120. Using our approximate gravity-wave mode function, we obtain much better accuracy; the tensor multipole spectrum we find differs by less than 2% for l ≲ 50, less than 10% for l ≲ 120, and less than 20% for l ≤ 300 from the 'Boltzmann' result. Our approximate graviton mode function should be quite useful in studying tensor perturbations from inflationary models. copyright 1996 The American Physical Society

  2. Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models

    International Nuclear Information System (INIS)

    Trizac, Emmanuel; Aubouy, Miguel; Bocquet, Lyderic

    2003-01-01

    We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spherical or cylindrical with infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e. in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al. within the cell model.

  3. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes considerable energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
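
    A minimal genetic algorithm of the kind proposed, assigning analytics jobs to cluster nodes to minimize makespan. The job durations, fitness function and GA settings are illustrative assumptions, and the paper's performance-estimation module is replaced here by fixed predicted runtimes.

        import random

        random.seed(7)
        durations = [12, 7, 31, 5, 18, 9, 24, 14]   # predicted job runtimes (minutes)
        n_nodes, pop_size, gens = 3, 40, 120

        def makespan(assign):
            """Fitness: finish time of the busiest node (lower is better)."""
            loads = [0] * n_nodes
            for job, node in enumerate(assign):
                loads[node] += durations[job]
            return max(loads)

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(a):
            a = list(a)
            a[random.randrange(len(a))] = random.randrange(n_nodes)
            return a

        pop = [[random.randrange(n_nodes) for _ in durations] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=makespan)                   # lower makespan = fitter
            elite = pop[: pop_size // 4]
            children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                        for _ in range(pop_size - len(elite))]
            pop = elite + children

        best = min(pop, key=makespan)
        print("assignment:", best, "makespan:", makespan(best))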

  4. Elliptic-cylindrical analytical flux-rope model for ICMEs

    Science.gov (United States)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in Interplanetary Coronal Mass Ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model, under the concept developed by Hidalgo et al. (2002). Elliptic-cylindrical geometry establishes the first grade of complexity in a series of models. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section as a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus consistently with the chosen geometry, invariance along the axial component, and the single assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for individual cases of different pairs of indices for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different in situ signatures. The successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects such as axial curvature, expansion and/or interaction could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal flux-rope analytical model.

  5. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, S.; Brincker, Rune

    An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on the local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model are compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load...

  6. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering mean that, at present, both large industrial enterprises and small engineering companies implement complex computer systems for efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and accommodation of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
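
    The node-selection step of such a balancer can be sketched as below; the monitoring fields and the weighted least-loaded policy are illustrative stand-ins for the algorithms studied in the paper.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            cpu_load: float      # 0..1, from condition monitoring
            queue_len: int       # requests already waiting

        def pick_node(nodes, w_cpu=0.7, w_queue=0.3):
            """Route a request to the node with the lowest weighted load score."""
            score = lambda n: w_cpu * n.cpu_load + w_queue * n.queue_len / 10.0
            return min(nodes, key=score)

        cluster = [Node("n1", 0.82, 4), Node("n2", 0.35, 7), Node("n3", 0.51, 1)]
        target = pick_node(cluster)
        target.queue_len += 1        # dispatch the request to the chosen node
        print("dispatched to", target.name)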

  7. Analytical dynamic modeling of fast trilayer polypyrrole bending actuators

    International Nuclear Information System (INIS)

    Amiri Moghadam, Amir Ali; Moavenian, Majid; Tahani, Masoud; Torabi, Keivan

    2011-01-01

    Analytical modeling of conjugated polymer actuators with complicated electro-chemo-mechanical dynamics is an interesting area for research, due to the wide range of applications including biomimetic robots and biomedical devices. Although there have been extensive reports on modeling the electrochemical dynamics of polypyrrole (PPy) bending actuators, mechanical dynamics modeling of the actuators remains unexplored. PPy actuators can operate with low voltage while producing large displacement in comparison to robotic joints; they do not have friction or backlash, but they suffer from some disadvantages such as creep and hysteresis. In this paper, a complete analytical dynamic model for fast trilayer polypyrrole bending actuators has been proposed and named the analytical multi-domain dynamic actuator (AMDDA) model. First an electrical admittance model of the actuator is obtained based on a distributed RC line; subsequently a proper mechanical dynamic model is derived, based on Hamilton's principle. The proposed modeling approach is validated against recently published experimental results.

  8. Development of collaborative-creative learning model using virtual laboratory media for instrumental analytical chemistry lectures

    Science.gov (United States)

    Zurweni, Wibawa, Basuki; Erwin, Tuti Nurian

    2017-08-01

    The framework for teaching and learning in the 21st century was prepared with the 4Cs criteria. Learning that provides opportunities for the development of students' creative skills can be achieved by implementing collaborative learning, in which learners are challenged to compete, to work independently, to bring individual or group excellence and to master the learning material. A virtual laboratory is used as the medium for Instrumental Analytical Chemistry lectures (Vis, UV-Vis, AAS, etc.) through computer simulation applications, and serves as a substitute for the laboratory when equipment and instruments are not available. This research aims to design and develop a collaborative-creative learning model using virtual laboratory media for Instrumental Analytical Chemistry lectures and to determine the effectiveness of this design, adapting the Dick & Carey and Hannafin & Peck models. The development steps of this model are: needs analysis, design of collaborative-creative learning, virtual laboratory media using Macromedia Flash, formative evaluation and testing of the learning model's effectiveness. The stages of the collaborative-creative learning model are: apperception, exploration, collaboration, creation, evaluation, feedback. The collaborative-creative learning model using virtual laboratory media can be used to improve the quality of learning in the classroom and to overcome the limited availability of lab instruments for real instrumental analysis. Formative test results show that the developed collaborative-creative learning model meets the requirements. The effectiveness test comparing students' pretest and posttest scores is significant at the 95% confidence level, with the t statistic exceeding the critical t value. It can be concluded that this learning model is effective for Instrumental Analytical Chemistry lectures.

  9. Writing analytic element programs in Python.

    Science.gov (United States)

    Bakker, Mark; Kelson, Victor A

    2009-01-01

    The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
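
    The object-oriented design the article describes, an element base class whose subclasses each contribute a potential and a model that superimposes them, can be sketched as follows. The discharge-potential formulas are the textbook expressions for steady confined flow; the class names are invented, and the article's multiple-inheritance design is simplified here to single inheritance.

        import numpy as np

        class Element:
            def potential(self, x, y):           # overridden by each element type
                raise NotImplementedError

        class Well(Element):
            def __init__(self, xw, yw, Q):
                self.xw, self.yw, self.Q = xw, yw, Q

            def potential(self, x, y):
                r = np.hypot(x - self.xw, y - self.yw)
                return self.Q / (2.0 * np.pi) * np.log(r)

        class UniformFlow(Element):
            def __init__(self, qx):
                self.qx = qx

            def potential(self, x, y):
                return -self.qx * x

        class Model:
            """Superimposes the potentials of all analytic elements."""

            def __init__(self, T):
                self.T, self.elements = T, []

            def add(self, e):
                self.elements.append(e)

            def head(self, x, y):
                phi = sum(e.potential(x, y) for e in self.elements)
                return phi / self.T              # confined flow: head = potential / T

        m = Model(T=100.0)                       # transmissivity in m^2/d
        m.add(Well(0.0, 0.0, Q=500.0))           # pumping well, discharge in m^3/d
        m.add(UniformFlow(qx=2.0))
        print(m.head(50.0, 0.0))

    New element types are added by subclassing Element and implementing potential; the Model needs no change, which mirrors the extensibility argument made in the article.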

  10. Analytic nearest neighbour model for FCC metals

    International Nuclear Information System (INIS)

    Idiodi, J.O.A.; Garba, E.J.D.; Akinlade, O.

    1991-06-01

    A recently proposed analytic nearest-neighbour model for fcc metals is criticised and two alternative nearest-neighbour models derived from the separable potential method (SPM) are recommended. Results for copper and aluminium illustrate the utility of the recommended models. (author). 20 refs, 5 tabs

  11. Analytical eigenstates for the quantum Rabi model

    International Nuclear Information System (INIS)

    Zhong, Honghua; Xie, Qiongtao; Lee, Chaohong; Batchelor, Murray T

    2013-01-01

    We develop a method to find analytical solutions for the eigenstates of the quantum Rabi model. These include symmetric, anti-symmetric and asymmetric analytic solutions given in terms of the confluent Heun functions. Both regular and exceptional solutions are given in a unified form. In addition, the analytic conditions for determining the energy spectrum are obtained. Our results show that conditions proposed by Braak (2011 Phys. Rev. Lett. 107 100401) are a type of sufficiency condition for determining the regular solutions. The well-known Judd isolated exact solutions appear naturally as truncations of the confluent Heun functions. (paper)

  12. Computational Modelling of Materials for Wind Turbine Blades: Selected DTU Wind Energy Activities.

    Science.gov (United States)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-11-08

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied in computational experiments as well.

  13. Analytical Models Development of Compact Monopole Vortex Flows

    Directory of Open Access Journals (Sweden)

    Pavlo V. Lukianov

    2017-09-01

    Conclusions. The article contains a series of the latest analytical models that describe both laminar and turbulent dynamics of monopole vortex flows, which have not been reflected in traditional publications up to the present. Further research should be directed toward the search for analytical models of coherent vortical structures in flows of viscous fluids, particularly near curved surfaces, where the "wall law" known in hydromechanics is violated and heat and mass transfer anomalies take place.

  14. An analytical and experimental investigation of natural circulation transients in a model pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Massoud, M

    1987-01-01

    Natural circulation phenomena in a simulated PWR were investigated experimentally and analytically. The experimental investigation included determination of system characteristics as well as system response to imposed transients under symmetric and asymmetric operations. System characteristics were used to obtain correlations for the heat transfer coefficient in heat exchangers, the system flow resistance, and the system buoyancy head. Asymmetric transients were imposed to study flow oscillation and possible instability. The analytical investigation encompassed development of a mathematical model for single-phase, steady-state and transient natural circulation as well as modification of an existing model for two-phase flow analysis of phenomena such as small break LOCA, high pressure coolant injection and pump coastdown. The developed mathematical model for single-phase analysis was computer coded to simulate the imposed transients. The computer program, entitled 'Symmetric and Asymmetric Analysis of Single-Phase Flow (SAS),' was employed to simulate the imposed transients. It closely emulated the system behavior throughout the transients and the subsequent steady states. Modifications for two-phase flow analysis included the addition of models for a once-through steam generator and electric heater rods. Both programs are faster than real time. Off-line, they can be used for prediction and training applications, while on-line they serve for simulation and signal validation. The programs can also be used to determine the sensitivity of natural circulation behavior to variation of inputs such as secondary distribution and power transients.

  15. Effects of Computer Based Learning on Students' Attitudes and Achievements towards Analytical Chemistry

    Science.gov (United States)

    Akcay, Husamettin; Durmaz, Asli; Tuysuz, Cengiz; Feyzioglu, Burak

    2006-01-01

    The aim of this study was to compare the effects of computer-based learning and the traditional method on students' attitudes and achievement towards analytical chemistry. Students from the Chemistry Education Department at Dokuz Eylul University (D.E.U) were selected randomly and divided into three groups: two experimental (Eg-1 and Eg-2) and a control…

  16. Analytical Model for Fictitious Crack Propagation in Concrete Beams

    DEFF Research Database (Denmark)

    Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune

    1995-01-01

    An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...

  17. Computational issues and applications of line-elements to model subsurface flow governed by the modified Helmholtz equation

    Science.gov (United States)

    Bakker, Mark; Kuhlman, Kristopher L.

    2011-09-01

    Two new approaches are presented for the accurate computation of the potential due to line elements that satisfy the modified Helmholtz equation with complex parameters. The first approach is based on fundamental solutions in elliptical coordinates and results in products of Mathieu functions. The second approach is based on the integration of modified Bessel functions. Both approaches allow evaluation of the potential at any distance from the element. The computational approaches are applied to model transient flow with the Laplace transform analytic element method. The Laplace domain solution is computed using a combination of point elements and the presented line elements. The time domain solution is obtained through a numerical inversion. Two applications are presented to transient flow fields, which could not be modeled with the Laplace transform analytic element method prior to this work. The first application concerns transient single-aquifer flow to wells near impermeable walls modeled with line-doublets. The second application concerns transient two-aquifer flow to a well near a stream modeled with line-sinks.
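
    The second approach, numerically integrating the modified Bessel point-source solution along the element, can be sketched directly: the free-space solution of (∇² − k²)φ = −δ in two dimensions is K0(kr)/(2π), so the potential of a uniform-strength line-sink follows by quadrature. Real parameters are used here for simplicity, whereas the paper treats complex parameters arising from the Laplace transform; the function name and values are illustrative.

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import k0

        def line_sink_potential(x, y, x1, x2, k, sigma=1.0):
            """Potential at (x, y) of a uniform line-sink on [x1, x2] along y = 0,
            for the modified Helmholtz equation with real parameter k."""
            def integrand(s):
                r = np.hypot(x - s, y)
                return k0(k * r) / (2.0 * np.pi)   # point-source fundamental solution
            val, _ = quad(integrand, x1, x2)
            return sigma * val

        # Potential a short distance from a unit-strength element of length 2.
        print(line_sink_potential(0.0, 0.5, -1.0, 1.0, k=0.3))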

  18. Analytical models for the rewetting of hot surfaces

    International Nuclear Information System (INIS)

    Olek, S.

    1988-10-01

    Some aspects concerning analytical models for the rewetting of hot surfaces are discussed. These include the problems of applying various forms of boundary conditions, the compatibility of boundary conditions with the physics of the rewetting problem, recent analytical models, the use of the separation of variables method versus the Wiener-Hopf technique, and the use of transformations. The report includes an updated list of rewetting models as well as benchmark solutions in tabular form for several models. It should be emphasized that this report is not meant to cover the topic of rewetting models exhaustively; it merely discusses some points which are less commonly referred to in the literature. 93 refs., 3 figs., 22 tabs

  19. System of Systems Analytic Workbench - 2017

    Science.gov (United States)

    2017-08-31

    Genetic Algorithm and Particle Swarm Optimization with Type-2 Fuzzy Sets for Generating Systems of Systems Architectures. Procedia Computer Science... The application effort involves modeling an existing messaging network to perform real-time situational awareness. The Analytical Workbench's

  20. Analytical and Computational Modeling of Mechanical Waves in Microscale Granular Crystals: Nonlinearity and Rotational Dynamics

    Science.gov (United States)

    Wallen, Samuel P.

    Granular media are one of the most common, yet least understood forms of matter on earth. The difficulties in understanding the physics of granular media stem from the fact that they are typically heterogeneous and highly disordered, and the grains interact via nonlinear contact forces. Historically, one approach to reducing these complexities and gaining new insight has been the study of granular crystals, which are ordered arrays of similarly-shaped particles (typically spheres) in Hertzian contact. Using this setting, past works explored the rich nonlinear dynamics stemming from contact forces, and proposed avenues where such granular crystals could form designer, dynamically responsive materials, which yield beneficial functionality in dynamic regimes. In recent years, the combination of self-assembly fabrication methods and laser ultrasonic experimental characterization have enabled the study of granular crystals at microscale. While our intuition may suggest that these microscale granular crystals are simply scaled-down versions of their macroscale counterparts, in fact, the relevant physics change drastically; for example, short-range adhesive forces between particles, which are negligible at macroscale, are several orders of magnitude stronger than gravity at microscale. In this thesis, we present recent advances in analytical and computational modeling of microscale granular crystals, in particular concerning the interplay of nonlinearity, shear interactions, and particle rotations, which have previously been either absent, or included separately at macroscale. Drawing inspiration from past works on phononic crystals and nonlinear lattices, we explore problems involving locally-resonant metamaterials, nonlinear localized modes, amplitude-dependent energy partition, and other rich dynamical phenomena. This work enhances our understanding of microscale granular media, which may find applicability in fields such as ultrasonic wave tailoring, signal processing
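
    The basic nonlinearity discussed above, a Hertzian contact force F = A δ^(3/2) that acts only in compression, can be illustrated by integrating a short chain of identical spheres struck at one end. The sketch assumes normal contacts only (no shear, rotation or adhesion), unit masses, and an illustrative contact stiffness.

        import numpy as np
        from scipy.integrate import solve_ivp

        N, A = 10, 5670.0            # number of spheres; Hertz stiffness (illustrative)

        def rhs(t, z):
            u, v = z[:N], z[N:]                          # displacements, velocities
            overlap = np.maximum(u[:-1] - u[1:], 0.0)    # contacts push, never pull
            f = A * overlap ** 1.5                       # Hertz law F = A*delta^(3/2)
            a = np.zeros(N)
            a[:-1] -= f                                  # reaction on the left sphere
            a[1:] += f                                   # push on the right sphere
            return np.concatenate([v, a])

        z0 = np.zeros(2 * N)
        z0[N] = 1.0                                      # strike the first sphere
        sol = solve_ivp(rhs, (0.0, 2.0), z0, max_step=1e-3)

        # The impulse propagates as a compact nonlinear wave; report final velocities.
        print(np.round(sol.y[N:, -1], 3))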

  1. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  2. Computational Modelling of Materials for Wind Turbine Blades: Selected DTUWind Energy Activities

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-01-01

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied...

  3. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, and genetics and molecular biology.

  4. Computational analysis of integrated biosensing and shear flow in a microfluidic vascular model

    Science.gov (United States)

    Wong, Jeremy F.; Young, Edmond W. K.; Simmons, Craig A.

    2017-11-01

    Fluid flow and flow-induced shear stress are critical components of the vascular microenvironment commonly studied using microfluidic cell culture models. Microfluidic vascular models mimicking the physiological microenvironment also offer great potential for incorporating on-chip biomolecular detection. In spite of this potential, however, there are few examples of such functionality. Detection of biomolecules released by cells under flow-induced shear stress is a significant challenge due to severe sample dilution caused by the fluid flow used to generate the shear stress, frequently to the extent where the analyte is no longer detectable. In this work, we developed a computational model of a vascular microfluidic cell culture model that integrates physiological shear flow and on-chip monitoring of cell-secreted factors. Applicable to multilayer device configurations, the computational model was applied to a bilayer configuration, which has been used in numerous cell culture applications including vascular models. Guidelines were established that allow cells to be subjected to a wide range of physiological shear stress while ensuring optimal rapid transport of analyte to the biosensor surface and minimized biosensor response times. These guidelines therefore enable the development of microfluidic vascular models that integrate cell-secreted factor detection while addressing flow constraints imposed by physiological shear stress. Ultimately, this work will result in the addition of valuable functionality to microfluidic cell culture models that further fulfill their potential as labs-on-chips.

  5. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations and for reaching an adequate and adaptive modelling of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
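
    The style of MapReduce processing described can be sketched in pure Python: a mapper emits (dataset, 1) for every analysis request found in the monitoring records, a shuffle groups the pairs by key, and a reducer counts dataset popularity. On the actual Hadoop cluster the same two functions would run distributed; the record layout below is invented for illustration.

        from collections import defaultdict

        # Toy monitoring records: (dataset, tier, activity) -- layout is invented.
        records = [
            ("/ZMM/Run2012/AOD", "T2_IT_Bari", "analysis"),
            ("/ZMM/Run2012/AOD", "T2_DE_DESY", "analysis"),
            ("/TT/Run2012/AOD", "T1_US_FNAL", "production"),
            ("/TT/Run2012/AOD", "T2_CH_CERN", "analysis"),
        ]

        def mapper(rec):
            dataset, _tier, activity = rec
            if activity == "analysis":           # which datasets users ask for
                yield dataset, 1

        def reducer(dataset, counts):
            yield dataset, sum(counts)

        # Shuffle phase: group mapper output by key, as Hadoop would.
        groups = defaultdict(list)
        for rec in records:
            for key, val in mapper(rec):
                groups[key].append(val)

        popularity = dict(kv for key, vals in groups.items()
                          for kv in reducer(key, vals))
        print(popularity)   # {'/ZMM/Run2012/AOD': 2, '/TT/Run2012/AOD': 1}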

  6. On application of analytical transformation system using a computer for Feynman integral calculation

    International Nuclear Information System (INIS)

    Gerdt, V.P.

    1978-01-01

    Various systems of analytic transformations for the calculation of Feynman integrals using computers are discussed. The hyperspherical technique, which is used to calculate Feynman integrals, enables angular integration to be performed for a set of diagrams, thus reducing the multiplicity of the integral. All calculations based on this method are made with the ASHMEDAL program. Feynman integrals are calculated in Euclidean space using integration by parts and some differential identities. Analytic calculation of Feynman integrals is performed by the MACSYMA system. A dispersion method of integral calculation is implemented in the SCHOONSCHIP system, and calculations based on features of the Nielsen function are made using the efficient SINAC and RSIN programs. A table of basic Feynman integral parameters calculated using the above techniques is given

  7. Analytical modeling of worldwide medical radiation use

    International Nuclear Information System (INIS)

    Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.

    1987-01-01

    An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is necessary as the first step in estimating the radiation dose to the world's population from this source. Since there are no data about the frequency of medical radiation use in more than half the countries in the world, and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the uses of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually, and in-vivo diagnostic nuclear medicine examinations at approximately 22 million. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually

  8. Analytic investigation of extended Heitler-Matthews model

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Stefan; Veberic, Darko; Engel, Ralph [KIT, IKP (Germany)

    2016-07-01

    Many features of extensive air showers are qualitatively well described by the Heitler cascade model and its extensions. The core of a shower is given by hadrons that interact with air nuclei. After each interaction some of these hadrons decay and feed the electromagnetic shower component. The most important parameters of such hadronic interactions are the inelasticity, the multiplicity, and the ratio of charged vs. neutral particles. However, in analytic considerations approximations are needed to include the characteristics of hadron production. We discuss extensions of the simple cascade model to an analytic description of air showers by cascade models that also include the elasticity, and derive the number of produced muons. In a second step we apply this model to calculate the dependence of the shower center of gravity on the model parameters. The depth of the center of gravity is closely related to that of the shower maximum, which is a commonly-used composition-sensitive observable.
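
    For orientation, the muon number in the original Heitler-Matthews model (before the elasticity extension discussed above) follows from letting each hadronic generation multiply particles until the pion critical energy is reached:

```latex
N_\mu = \left(\frac{E_0}{\varepsilon_c^{\pi}}\right)^{\beta},
\qquad
\beta = \frac{\ln n_{\mathrm{ch}}}{\ln n_{\mathrm{tot}}} \approx 0.9,
```

    where $E_0$ is the primary energy, $\varepsilon_c^{\pi}$ the pion critical energy, and $n_{\mathrm{ch}}$ ($n_{\mathrm{tot}}$) the charged (total) multiplicity; the extensions summarized above effectively modify $\beta$.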

  9. Analytical modeling of pressure transient behavior for coalbed methane transport in anisotropic media

    International Nuclear Information System (INIS)

    Wang, Lei; Wang, Xiaodong

    2014-01-01

    Because coal media are naturally anisotropic, it is meaningful to evaluate pressure transient behavior and flow characteristics within coals. In this article, a complete analytical model called the elliptical flow model is established by combining the theory of elliptical flow in anisotropic media with Fick's laws for the diffusion of coalbed methane. To investigate pressure transient behavior, analytical solutions were first obtained by introducing a series of special functions (Mathieu functions), which are extremely complex and hard to evaluate. Thus, a computer program was developed to establish type curves, on which the effects of the parameters, including the anisotropy coefficient, storage coefficient, transfer coefficient and rate constant, were analyzed in detail. The calculated results show that the existence of anisotropy causes great pressure depletion. To validate the new analytical solutions, previous results were used for comparison, and good agreement between the solutions obtained in this work and those in the literature was achieved. Finally, a case study is used to explain the effects of the parameters, including the rock total compressibility coefficient, coal medium porosity and anisotropic permeability, sorption time constant, Langmuir volume and fluid viscosity, on bottom-hole pressure behavior. It is necessary to coordinate these parameters so as to reduce the pressure depletion. (paper)
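
    The Mathieu functions mentioned above are indeed awkward to evaluate by hand, but standard libraries provide them; a minimal SciPy sketch (with illustrative order and parameter values, not those of the coalbed model) is:

```python
import numpy as np
from scipy import special

m, q = 1, 5.0                        # order and Mathieu parameter (illustrative values)
a = special.mathieu_a(m, q)          # characteristic value of the even Mathieu function
x_deg = np.linspace(0.0, 180.0, 5)   # SciPy expects the angle in degrees
ce, ce_prime = special.mathieu_cem(m, q, x_deg)  # even function and its derivative
print(a, ce)
```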

  10. Analytical and numerical models of transport in porous cementitious materials

    International Nuclear Information System (INIS)

    Garboczi, E.J.; Bentz, D.P.

    1990-01-01

    Most chemical and physical processes that degrade cementitious materials are dependent on an external source of either water or ions or both. Understanding the rates of these processes at the microstructural level is necessary in order to develop a sound scientific basis for the prediction and control of the service life of cement-based materials, especially for radioactive-waste containment materials that are required to have service lives on the order of hundreds of years. An important step in developing this knowledge is to understand how transport coefficients, such as diffusivity and permeability, depend on the pore structure. Fluid flow under applied pressure gradients and ionic diffusion under applied concentration gradients are important transport mechanisms that take place in the pore space of cementitious materials. This paper describes: (1) a new analytical percolation-theory-based equation for calculating the permeability of porous materials, (2) new computational methods for computing effective diffusivities of microstructural models or digitized images of actual porous materials, and (3) a new digitized-image mercury intrusion simulation technique
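
    The abstract does not reproduce the new permeability equation itself; a classic percolation-based relation of the same family, given here only as a reference point, is the Katz-Thompson estimate

```latex
k \;\approx\; \frac{d_c^{2}}{226}\,\frac{\sigma}{\sigma_0},
```

    where $d_c$ is the critical pore diameter (obtainable from mercury intrusion, as simulated above) and $\sigma/\sigma_0$ is the ratio of the conductivity of the saturated porous material to that of the pore fluid.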

  11. Assessment of effectiveness of geologic isolation systems. Analytic modeling of flow in a permeable fissured medium

    International Nuclear Information System (INIS)

    Strack, O.D.L.

    1982-02-01

    An analytic model has been developed for two-dimensional steady flow through infinite fissured porous media, and is implemented in a computer program. The model is the first, and major, step toward the development of a model with finite boundaries, intended for use as a tool for numerical experiments. These experiments may serve to verify some of the simplifying assumptions made in continuum models and to gain insight into the mechanics of the flow. The model is formulated in terms of complex variables, and the analytic functions presented are closed-form expressions obtained from singular Cauchy integrals. An exact solution is given for the case of a single crack in an infinite porous medium. The exact solution is compared with the result obtained by an independent method, which assumes Darcian flow in the crack and models the crack as an inhomogeneity in the permeability, in order to verify the simplifying assumptions. The approximate model is compared with solutions obtained from the above independent method for some cases of intersecting cracks. The agreement is good, provided that a sufficient number of elements is used to model the cracks

  12. Analytical Modeling for Underground Risk Assessment in Smart Cities

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

    Full Text Available In the developed world, underground facilities are increasing day-by-day, as they are considered an improved utilization of available space in smart cities. Typical facilities include underground railway lines, electricity lines, parking lots, water supply systems, sewerage networks, etc. Besides their utility, these facilities also pose serious threats to citizens and property. To preempt the accidental loss of precious human lives and property, a real-time monitoring system is highly desirable for conducting risk assessment on a continuous basis and reporting any abnormality in a timely manner. In this paper, we present an analytical formulation to model system behavior for risk analysis and assessment based on various risk contributing factors. Based on the proposed analytical model, we have evaluated three approximation techniques for computing the final risk index: (a) a simple linear approximation based on multiple linear regression analysis; (b) a hierarchical fuzzy logic based technique in which related risk factors are combined in a tree-like structure; and (c) a hybrid approximation approach which is a combination of (a) and (b). Experimental results show that the simple linear approximation fails to accurately estimate the final risk index as compared to the hierarchical fuzzy logic based system, which shows that the latter provides an efficient method for monitoring and forecasting critical issues in underground facilities and may assist in maintenance efficiency as well. Estimation results based on the hybrid approach also fail to accurately estimate the final risk index. However, the hybrid scheme reveals some interesting and detailed information by performing automatic clustering based on the location risk index.
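
    As an illustration of the difference between approaches (a) and (b), the following Python sketch combines a few hypothetical, normalized risk factors first as a weighted sum and then through a simple two-level (tree-like) aggregation; the factor names, weights and rules are invented for illustration and are not the paper's:

```python
# Hypothetical risk factors normalized to [0, 1]; names are illustrative only.
factors = {"gas_leak": 0.7, "water_ingress": 0.2, "structural_strain": 0.5}
weights = {"gas_leak": 0.5, "water_ingress": 0.2, "structural_strain": 0.3}

def linear_index(f, w):
    # (a) simple linear approximation: weighted sum of all factors
    return sum(w[k] * f[k] for k in f)

def hierarchical_index(f):
    # (b) tree-like combination: aggregate related factors into sub-indices,
    # then combine them (max mimics a worst-case fuzzy rule)
    environment = (f["gas_leak"] + f["water_ingress"]) / 2
    structure = f["structural_strain"]
    return max(environment, structure)

print(linear_index(factors, weights), hierarchical_index(factors))
```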

  13. Reference interaction site model with hydrophobicity induced density inhomogeneity: An analytical theory to compute solvation properties of large hydrophobic solutes in the mixture of polyatomic solvent molecules

    International Nuclear Information System (INIS)

    Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui

    2015-01-01

    Reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamical and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that there exists water density depletion around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixture with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and different water-acetonitrile mixtures could be computed, which agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties for the solvation of hydrophobic solute

  14. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is given to the analytical model-based fault-detection and fault...

  15. Computer modeling of liquid crystals

    International Nuclear Information System (INIS)

    Al-Barwani, M.S.

    1999-01-01

    In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some of the liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture; studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large scale simulations of soft-spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest energy states and investigate if they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)

  16. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling to increase the efficiency of biomass conversion. Plant cell walls are the source of biofuels and biomaterials, and NREL's modeling investigates their properties. Quantum mechanical models are used to study chemical and electronic properties and processes to reduce barriers.

  17. A Generative Computer Model for Preliminary Design of Mass Housing

    Directory of Open Access Journals (Sweden)

    Ahmet Emre DİNÇER

    2014-05-01

    Full Text Available Today, we live in what we call the "Information Age", an age in which information technologies are constantly being renewed and developed. Out of this has emerged a new approach called "Computational Design" or "Digital Design". In addition to significantly influencing all fields of engineering, this approach has come to play a similar role in all stages of the design process in the architectural field. In providing solutions for analytical problems in design, such as cost estimation, circulation-system evaluation and environmental effects, which are similar to engineering problems, this approach is also used in the evaluation, representation and presentation of traditionally designed buildings. With developments in software and hardware technology, it has evolved into studies of the design of architectural products and their production with digital tools in the preliminary design stages. This paper presents a digital model that may be used in the preliminary stage of mass housing design with Cellular Automata, one of the generative design systems based on computational design approaches. This computational model, developed with scripts in 3ds Max software, has been applied to the site plan design of mass housing, floor plan organizations made according to user preferences, and facade designs. With the developed computer model, many alternative housing types can be produced rapidly. The interactive design tool of this computational model allows the user to transfer dimensional and functional housing preferences by means of the interface prepared for the model. The results of the study are discussed in the light of innovative architectural approaches.
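
    A cellular automaton of the kind used for such generative massing studies can be sketched in a few lines of Python; the grid size and transition rule below are illustrative stand-ins (the paper's actual rules and 3ds Max scripting are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(10, 10))   # 1 = housing block, 0 = open space

def step(g):
    # Count occupied neighbours (Moore neighbourhood) by shifting the grid.
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Illustrative rule: build where exactly 3 neighbours exist,
    # keep an existing block with 2-4 neighbours.
    return ((n == 3) | ((g == 1) & (n >= 2) & (n <= 4))).astype(int)

for _ in range(5):        # iterate to evolve alternative site layouts
    grid = step(grid)
print(grid)
```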

  18. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  19. Evolution of perturbed dynamical systems: analytical computation with time independent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Gurzadyan, A.V. [Russian-Armenian (Slavonic) University, Department of Mathematics and Mathematical Modelling, Yerevan (Armenia); Kocharyan, A.A. [Monash University, School of Physics and Astronomy, Clayton (Australia)

    2016-12-15

    An analytical method with time-independent accuracy is developed for investigating the evolution of perturbed Hamiltonian dynamical systems. Error-free estimation using computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens up fundamental opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics. (orig.)

  20. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if run sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
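
    Steps (1)-(3) of the platform — building a task dependency graph, ordering it topologically, and running independent tasks in parallel — can be sketched with Python's standard library; the task names below are hypothetical, and PARAMO itself uses Map-Reduce in a cluster rather than a local thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter   # Python 3.9+

# Hypothetical pipeline tasks; each task maps to its prerequisites.
graph = {
    "cohort":   set(),
    "features": {"cohort"},
    "cv_split": {"features"},
    "select":   {"cv_split"},
    "classify": {"select"},
}

def run(task):
    print("running", task)              # placeholder for the real work

ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())    # all tasks whose prerequisites are done
        list(pool.map(run, ready))      # independent tasks run in parallel
        ts.done(*ready)
```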

  1. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  2. Development and testing of analytical models for the pebble bed type HTRs

    International Nuclear Information System (INIS)

    Huda, M.Q.; Obara, T.

    2008-01-01

    The pebble bed type gas cooled high temperature reactor (HTR) appears to be a good candidate for next-generation nuclear reactor technology. These reactors have unique characteristics in terms of the randomness in geometry, and require special techniques to analyze their systems. This study includes activities concerning the testing of computational tools and the qualification of models. Indeed, it is essential that validated analytical tools be available to the research community. From this viewpoint, codes like MCNP, ORIGEN and RELAP5, which have been used in the nuclear industry for many years, are selected to identify and develop new capabilities needed to support HTR analysis. The geometrical model of the full reactor is obtained by using the lattice and universe facilities provided by MCNP. The coupled MCNP-ORIGEN code is used to estimate the burnup and the refuelling scheme. Results obtained from the Monte Carlo analysis are interfaced with RELAP5 to analyze the thermal hydraulics and safety characteristics of the reactor. New models and methodologies are developed for several past and present experimental and prototypical facilities that were based on HTR pebble bed concepts. The calculated results are compared with available experimental data and theoretical evaluations, showing very good agreement. The ultimate goal of the validation of the computer codes for pebble bed HTR applications is to acquire and reinforce the capability of these general purpose computer codes for performing HTR core design and optimization studies

  3. Quantum decay model with exact explicit analytical solution

    Science.gov (United States)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  4. Computer models of dipole magnets of a series 'VULCAN' for the ALICE experiment

    International Nuclear Information System (INIS)

    Vodop'yanov, A.S.; Shishov, Yu.A.; Yuldasheva, M.B.; Yuldashev, O.I.

    1998-01-01

    The paper is devoted to the construction of computer models for three magnets of the 'VULCAN' series within the framework of a differential approach based on two scalar potentials. The distinctive property of these magnets is that they are 'warm' and their coils have a conic saddle shape. An algorithm for creating a computer model of the coils is suggested. The coil field is computed by the Biot-Savart law, and a part of the integrals is calculated with the help of analytical formulas. To compute three-dimensional magnetic fields by the finite element method with local accuracy control, two new algorithms are suggested. The former is based on a comparison of the fields computed by means of linear and quadratic shape functions. The latter is based on a comparison of the field computed with the help of linear shape functions and a local classical solution. The distributions of the local accuracy control characteristics within the working part of the third magnet and the other results of the computations are presented
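
    A minimal Python sketch of a Biot-Savart computation for a coil discretized into straight segments is given below; it uses a plain midpoint rule throughout, whereas the models described above additionally evaluate part of the integrals analytically:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(points, current, r):
    """Field at point r from a coil approximated by straight segments."""
    b = np.zeros(3)
    for p0, p1 in zip(points[:-1], points[1:]):
        dl = p1 - p0                  # segment vector
        mid = 0.5 * (p0 + p1)         # segment midpoint
        rv = r - mid
        d = np.linalg.norm(rv)
        b += MU0 * current / (4 * np.pi) * np.cross(dl, rv) / d**3
    return b

# Check: circular loop of radius 1 m carrying 1 A; the field at the centre
# should approach mu0*I/(2R) ~ 6.28e-7 T.
theta = np.linspace(0, 2 * np.pi, 201)
loop = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
print(biot_savart(loop, 1.0, np.array([0.0, 0.0, 0.0])))
```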

  5. Analytical Modeling Approach to Study Harmonic Mitigation in AC Grids with Active Impedance at Selective Frequencies

    Directory of Open Access Journals (Sweden)

    Gonzalo Abad

    2018-05-01

    Full Text Available This paper presents an analytical model oriented to the study of harmonic mitigation aspects in AC grids. As is well known, the presence of undesired harmonics in AC grids can be palliated in several manners. However, in this paper, a power electronic-based active impedance at selective frequencies (ACISEF) is used, due to its already proven flexibility and adaptability to the changing characteristics of AC grids. Hence, the proposed analytical model is specially conceived to globally consider the model of the AC grid itself, with its equivalent electric impedances, together with the power electronic-based ACISEF, including its control loops. In addition, the proposed analytical model has practical and useful properties: it is simple to understand and simple to use, it has low computational cost, it adapts simply to different AC grid scenarios, and it provides an accurate enough representation of reality. The benefits of using the proposed analytical model are shown in this paper through some examples of its usefulness, including an analysis of stability and the identification of sources of instability for a robust design, an analysis of effectiveness in harmonic mitigation, an analysis to assist in the choice of the most suitable active impedance under a given state of the AC grid, an analysis of the interaction between different compensators, and so on. To conclude, experimental validation of a 2.15 kA ACISEF in a real 33 kV AC grid is provided, in which real users (household and industry loads) and crucial elements such as wind parks and HVDC systems are interconnected nearby.

  6. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    Science.gov (United States)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is utilized for evaluating a spreading model of rumor. Firstly, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. Subsequently, a spreading model of rumor that takes a forgetting mechanism into consideration is assumed, and LADM is applied to solve it. By means of the aforementioned method, a general solution is obtained for this problem which can be readily employed for assessing the rumor model without recourse to any computer program. In addition, the consequences obtained for this problem are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for eliciting solutions of this model. Eventually, it is concluded that this method is appropriate for this problem and can provide researchers with a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.

  7. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    Science.gov (United States)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is the derivation of an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).

  8. Analytic approaches to atomic response properties

    International Nuclear Information System (INIS)

    Lamm, E.E.

    1980-01-01

    Many important response properties, e.g., multipole polarizabilities and sum rules, photodetachment cross sections, and closely-related long-range dispersion force coefficients, are insensitive to details of electronic structure. In this investigation, analytic asymptotic theories of atomic response properties are constructed that yield results as accurate as those obtained by more elaborate numerical methods. In the first chapter, a novel and simple method is used to determine the multipole sum rules S_l(-k), for positive and negative values of k, of the hydrogen atom and the hydrogen negative ion in the asymptotic approximation. In the second chapter, an analytically-tractable extended asymptotic model for the response properties of weakly-bound anions is proposed, and the multipole polarizability, multipole sum rules, and photodetachment cross section determined by the model are computed analytically. Dipole polarizabilities and photodetachment cross sections determined from the model for Li-, Na-, and K- are compared with the numerical results of Moores and Norcross. Agreement is typically within 15% if the pseudopotential is included. In the third chapter a comprehensive and unified treatment of atomic multipole oscillator strengths, dynamic multipole polarizabilities, and dispersion force constants in a variety of Coulomb-like approximations is presented. A theoretically and computationally superior modification of the original Bates-Damgaard (BD) procedure, referred to here simply as the Coulomb approximation (CA), is introduced. An analytic expression for the dynamic multipole polarizability is found which contains as special cases this quantity within the CA, the extended Coulomb approximation (ECA) of Adelman and Szabo, and the quantum defect orbital (QDO) method of Simons

  9. Design of homogeneous trench-assisted multi-core fibers based on analytical model

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2016-01-01

    We present a design method of homogeneous trench-assisted multicore fibers (TA-MCFs) based on an analytical model utilizing an analytical expression for the mode coupling coefficient between two adjacent cores. The analytical model can also be used for crosstalk (XT) properties analysis, such as ...

  10. Analytical method of CIM to PIM transformation in Model Driven Architecture (MDA

    Directory of Open Access Journals (Sweden)

    Martin Kardos

    2010-06-01

    Full Text Available Information system models on a higher level of abstraction have become a daily routine in many software companies. The concept of Model Driven Architecture (MDA), published by the standardization body OMG since 2001, has become a concept for the creation of software applications and information systems. MDA specifies four levels of abstraction: the top three levels are created as graphical models and the last one as an implementation code model. Much MDA research focuses on the lower levels and the transformations between them. The top level of abstraction, called the Computation Independent Model (CIM), and its transformation to the lower level, called the Platform Independent Model (PIM), is not such an extensive research topic. Considering the great importance and usability of this level in the practice of information system development, our research activity is now focused on this highest level of abstraction, the CIM, and its possible transformation to the lower PIM level. In this article we present a possible solution for CIM modeling and an analytic method for its transformation to PIM. Keywords: transformation, MDA, CIM, PIM, UML, DFD.

  11. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  12. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    Science.gov (United States)

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  13. Analytical and computational modelling for wave energy systems: the example of oscillating wave surge converters

    Science.gov (United States)

    Dias, Frédéric; Renzi, Emiliano; Gallagher, Sarah; Sarkar, Dripta; Wei, Yanji; Abadie, Thomas; Cummins, Cathal; Rafiee, Ashkan

    2017-08-01

    The development of new wave energy converters has shed light on a number of unanswered questions in fluid mechanics, but has also identified a number of new issues of importance for their future deployment. The main concerns relevant to the practical use of wave energy converters are sustainability, survivability, and maintainability. Of course, it is also necessary to maximize the capture per unit area of the structure as well as to minimize the cost. In this review, we consider some of the questions related to the topics of sustainability, survivability, and maintenance access, with respect to sea conditions, for generic wave energy converters with an emphasis on the oscillating wave surge converter. New analytical models that have been developed are a topic of particular discussion. It is also shown how existing numerical models have been pushed to their limits to provide answers to open questions relating to the operation and characteristics of wave energy converters.

  14. Analytical and computational modelling for wave energy systems: the example of oscillating wave surge converters.

    Science.gov (United States)

    Dias, Frédéric; Renzi, Emiliano; Gallagher, Sarah; Sarkar, Dripta; Wei, Yanji; Abadie, Thomas; Cummins, Cathal; Rafiee, Ashkan

    2017-01-01

    The development of new wave energy converters has shed light on a number of unanswered questions in fluid mechanics, but has also identified a number of new issues of importance for their future deployment. The main concerns relevant to the practical use of wave energy converters are sustainability, survivability, and maintainability. Of course, it is also necessary to maximize the capture per unit area of the structure as well as to minimize the cost. In this review, we consider some of the questions related to the topics of sustainability, survivability, and maintenance access, with respect to sea conditions, for generic wave energy converters with an emphasis on the oscillating wave surge converter. New analytical models that have been developed are a topic of particular discussion. It is also shown how existing numerical models have been pushed to their limits to provide answers to open questions relating to the operation and characteristics of wave energy converters.

  15. Computation of potentials from current electrodes in cylindrically stratified media: A stable, rescaled semi-analytical formulation

    Science.gov (United States)

    Moon, Haksu; Teixeira, Fernando L.; Donderici, Burkay

    2015-01-01

    We present an efficient and robust semi-analytical formulation to compute the electric potential due to arbitrary-located point electrodes in three-dimensional cylindrically stratified media, where the radial thickness and the medium resistivity of each cylindrical layer can vary by many orders of magnitude. A basic roadblock for robust potential computations in such scenarios is the poor scaling of modified-Bessel functions used for computation of the semi-analytical solution, for extreme arguments and/or orders. To accommodate this, we construct a set of rescaled versions of modified-Bessel functions, which avoids underflows and overflows in finite precision arithmetic, and minimizes round-off errors. In addition, several extrapolation methods are applied and compared to expedite the numerical evaluation of the (otherwise slowly convergent) associated Sommerfeld-type integrals. The proposed algorithm is verified in a number of scenarios relevant to geophysical exploration, but the general formulation presented is also applicable to other problems governed by Poisson equation such as Newtonian gravity, heat flow, and potential flow in fluid mechanics, involving cylindrically stratified environments.
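
    The rescaling idea can be illustrated with SciPy's exponentially scaled modified-Bessel functions, which implement the same remedy for overflow at large arguments and orders (the values below are illustrative, and the paper's own rescaling is more general):

```python
from scipy import special

v, z = 50.0, 800.0           # high order / large argument (illustrative values)
print(special.iv(v, z))      # unscaled I_v(z) overflows to inf in double precision
# Exponentially scaled variants stay finite:
#   ive(v, z) = iv(v, z) * exp(-|Re z|),  kve(v, z) = kv(v, z) * exp(z)
print(special.ive(v, z), special.kve(v, z))
```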

  16. R2SM: a package for the analytic computation of the R2 Rational terms in the Standard Model of the Electroweak interactions

    International Nuclear Information System (INIS)

    Garzelli, M.V.

    2011-01-01

    The analytical package written in FORM presented in this paper allows the computation of the complete set of Feynman Rules producing the Rational terms of kind R 2 contributing to the virtual part of NLO corrections in the Standard Model of the Electroweak interactions. Building-block topologies filled by means of generic scalars, vectors and fermions, allowing one to build these Feynman Rules in terms of specific elementary particles, are explicitly given in the R ξ gauge class, together with the automatic dressing procedure to obtain the Feynman Rules from them. The results in more specific gauges, like the 't Hooft-Feynman one, follow as particular cases, in both the HV and the FDH dimensional regularization schemes. As a check on our formulas, the gauge independence of the total Rational contribution (R 1 +R 2 ) to renormalized S-matrix elements is verified by considering the specific example of the H →γγ decay process at 1-loop. This package can be of interest for people aiming at a better understanding of the nature of the Rational terms. It is organized in a modular way, allowing further use of some of its files even in different contexts. Furthermore, it can be considered as a first seed in the effort towards a complete automation of the process of the analytical calculation of the R 2 effective vertices, given the Lagrangian of a generic gauge theory of particle interactions. (orig.)

  17. Rethinking Visual Analytics for Streaming Data Applications

    Energy Technology Data Exchange (ETDEWEB)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    2017-01-01

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition is a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive

  18. MERRA Analytic Services: Meeting the Big Data Challenges of Climate Science through Cloud-Enabled Climate Analytics-as-a-Service

    Science.gov (United States)

    Schnase, J. L.; Duffy, D.; Tamkin, G. S.; Nadeau, D.; Thompson, J. H.; Grieg, C. M.; McInerney, M.; Webster, W. P.

    2013-12-01

    Climate science is a Big Data domain that is experiencing unprecedented growth. In our efforts to address the Big Data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics, because it is the knowledge gained from our interactions with Big Data that ultimately produce societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by Cloud Computing. Within this framework, Cloud Computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics as a service. These elements are essential because in the aggregate they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the Big Data challenges in this domain. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS built on this principle. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of 26 key climate variables. It represents a type of data product that is of growing importance to scientists doing climate change research and a wide range of decision support applications. MERRA/AS brings together the following generative elements in a full, end-to-end demonstration of CAaaS capabilities: (1) high-performance, data proximal analytics, (2) scalable data management, (3) software appliance virtualization, (4) adaptive analytics, and (5) a domain-harmonized API. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, Cloud Computing lowers the barriers and risk to

  19. MERRA Analytic Services: Meeting the Big Data Challenges of Climate Science Through Cloud-enabled Climate Analytics-as-a-service

    Science.gov (United States)

    Schnase, John L.; Duffy, Daniel Quinn; Tamkin, Glenn S.; Nadeau, Denis; Thompson, John H.; Grieg, Christina M.; McInerney, Mark A.; Webster, William P.

    2014-01-01

    Climate science is a Big Data domain that is experiencing unprecedented growth. In our efforts to address the Big Data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics, because it is the knowledge gained from our interactions with Big Data that ultimately produce societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by Cloud Computing. Within this framework, Cloud Computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics as a service. These elements are essential because in the aggregate they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the Big Data challenges in this domain. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS built on this principle. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of 26 key climate variables. It represents a type of data product that is of growing importance to scientists doing climate change research and a wide range of decision support applications. MERRA/AS brings together the following generative elements in a full, end-to-end demonstration of CAaaS capabilities: (1) high-performance, data proximal analytics, (2) scalable data management, (3) software appliance virtualization, (4) adaptive analytics, and (5) a domain-harmonized API. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, Cloud Computing lowers the barriers and risk to

  20. Computer generation of cobalt-60 single beam dose distribution using an analytical beam model

    International Nuclear Information System (INIS)

    Jayaraman, S.

    1981-01-01

    A beam dose calculation model based on evaluation of tissue air ratios (TAR) and scatter air ratios (SAR) for cobalt-60 beams of rectangular cross section has been developed. The off-central-axis fall-off of primary radiation intensity is derived from an empirical formulation involving an arctangent function, with the slope of the geometrical penumbra acting as an essential constant. Central axis TAR and SAR values are assessed by semi-empirical polynomial expressions employing the two sides of the rectangular field as the variables. The model utilises a minimum number of parametric constants and is useful for computer generation of isodose curves. The model is capable of accounting for situations where wedge filters or split-field shielding blocks are encountered. Further, it could be widely applied, with minor modifications, to several makes of the currently available cobalt-60 units. The paper explains the model and shows examples of the results obtained in comparison with the corresponding experimentally determined dose distributions. (orig.)
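
    The abstract does not give the empirical formula explicitly; a plausible arctangent profile of the kind described, stated here purely as an assumed illustrative form, is

```latex
P(x) \;=\; \frac{P_0}{\pi}\left[\arctan\!\left(\frac{a-x}{s}\right)
      + \arctan\!\left(\frac{a+x}{s}\right)\right],
```

    where $x$ is the off-axis distance, $2a$ the field width, and $s$ the penumbra slope; for $a \gg s$ this form yields $P_0$ on the central axis and falls to $P_0/2$ at the geometric field edge $x = a$.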

  1. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
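
    The simplest member of this class, rejection ABC, can be written in a few lines of Python; the Gaussian toy model, uniform prior and tolerance below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(2.0, 1.0, size=100)   # stand-in for real observed data
s_obs = observed.mean()                     # summary statistic of the data

accepted = []
for _ in range(100_000):
    mu = rng.uniform(-5, 5)                 # draw parameter from the prior
    sim = rng.normal(mu, 1.0, size=100)     # simulate data under the model
    if abs(sim.mean() - s_obs) < 0.05:      # keep mu if summaries are close
        accepted.append(mu)

# Accepted draws approximate the posterior of mu without a likelihood.
print(np.mean(accepted), np.std(accepted))
```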

  2. Analytical Model for Sensor Placement on Microprocessors

    National Research Council Canada - National Science Library

    Lee, Kyeong-Jae; Skadron, Kevin; Huang, Wei

    2005-01-01

    .... In this paper, we present an analytical model that describes the maximum temperature differential between a hot spot and a region of interest based on their distance and processor packaging information...

  3. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.
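
    For reference, the two ingredients combined in such a model are the classical stretched exponential and the Tsallis $q$-exponential (the authors' modified form is not given in the abstract):

```latex
f(t) = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
\qquad
e_q(x) = \left[1 + (1-q)\,x\right]_{+}^{\frac{1}{1-q}},
```

    with $0 < \beta \le 1$, and $e_q(x) \to e^{x}$ as $q \to 1$; $q \ne 1$ is the signature of non-extensive (Tsallis) statistics.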

  4. Two-dimensional analytical model of a proton exchange membrane fuel cell

    International Nuclear Information System (INIS)

    Liu, Jia Xing; Guo, Hang; Ye, Fang; Ma, Chong Fang

    2017-01-01

    In this study, a two-dimensional full-cell analytical model of a proton exchange membrane fuel cell is developed. The analytical model describes the electrochemical reactions in the anode and cathode catalyst layers, reactant diffusion in the gas diffusion layer, and gas flow in the gas channel. The analytical solution is derived from the basic physical equations. The performance predicted by the model is in good agreement with the experimental data. The results show that the polarization mainly occurs on the cathode side of the proton exchange membrane fuel cell. The anodic overpotential cannot be neglected. The hydrogen and oxygen concentrations decrease along the channel flow direction. The hydrogen and oxygen concentrations in the catalyst layer decrease with the current density. As predicted by the model, concentration polarization mainly occurs on the cathode side. - Highlights: • A 2D full-cell analytical model of a proton exchange membrane fuel cell is developed. • The analytical solution is deduced from the basic equations. • The anode overpotential is not small enough to be neglected. • Species concentration distributions in the fuel cell are obtained and analyzed.
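
    Analytical full-cell models of this kind typically decompose the cell voltage into an open-circuit value minus the loss terms discussed above; as a generic sketch (not the authors' exact expressions):

```latex
V_{\mathrm{cell}} = E_{\mathrm{oc}}
  - \eta_{\mathrm{act}}^{\mathrm{a}} - \eta_{\mathrm{act}}^{\mathrm{c}}
  - \eta_{\mathrm{ohm}} - \eta_{\mathrm{conc}},
```

    where the anode and cathode activation overpotentials follow from the catalyst-layer kinetics, the ohmic term from the membrane resistance, and the concentration term from reactant depletion in the gas diffusion layer and channel.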

  5. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    Science.gov (United States)

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, the basic geologic assumptions for the area, the type of available data, the time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) include a variety of geologic models, (2) use an analytic methodology instead of Monte Carlo simulation, (3) possess the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) run quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.

  6. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  7. Analytical modeling of post-tensioned precast beam-to-column connections

    International Nuclear Information System (INIS)

    Kaya, Mustafa; Arslan, A. Samet

    2009-01-01

    In this study, post-tensioned precast beam-to-column connections are tested experimentally at different stress levels and are modelled analytically using a 3D nonlinear finite element modelling method. The ANSYS finite element software is used for this purpose. Nonlinear static analysis is used to determine the connection strength, behavior and stiffness when subjected to cyclic inelastic loads simulating ground excitation during an earthquake. The results obtained from the analytical studies are compared with the test results. In terms of stiffness, it was seen that the initial stiffness of the analytical models was lower than that of the tested specimens. As a result, modelling of these types of connection using 3D FEM can give crucial beforehand information and overcome the disadvantages of time-consuming workmanship and the cost of experimental studies.

  8. Evaluation of Analytical Modeling Functions for the Phonation Onset Process

    Directory of Open Access Journals (Sweden)

    Simon Petermann

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator of correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method for automatically quantifying the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of the VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency, represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest computing the VOT by a fourth-order polynomial approximation in the interval between 32.2% and 67.8% of the saturation amplitude of the filtered GAW.
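
    As a rough illustration of the recommended computation (not the authors' implementation; the synthetic envelope below is a simplified assumption), a fourth-order polynomial can be fitted to the GAW envelope and the VOT read off between the two amplitude fractions:

```python
import numpy as np

def vot_from_gaw_envelope(t, envelope, lo=0.322, hi=0.678):
    """Fit a 4th-order polynomial to the GAW envelope during phonation
    onset and return the time spent between the lo and hi fractions of
    the saturation amplitude (the interval recommended in the abstract)."""
    coeffs = np.polyfit(t, envelope, deg=4)
    t_fine = np.linspace(t[0], t[-1], 10000)
    fit = np.polyval(coeffs, t_fine)
    a_sat = fit.max()                            # saturation amplitude
    t_lo = t_fine[np.argmax(fit >= lo * a_sat)]  # first 32.2% crossing
    t_hi = t_fine[np.argmax(fit >= hi * a_sat)]  # first 67.8% crossing
    return t_hi - t_lo

# Synthetic onset sampled at 8000 fps: amplitude rising toward saturation
t = np.arange(0.0, 0.25, 1.0 / 8000.0)
envelope = 1.0 - np.exp(-t / 0.05)
print(f"VOT ~ {vot_from_gaw_envelope(t, envelope) * 1e3:.1f} ms")
```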

  9. A fast semi-analytical model for the slotted structure of induction motors

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    A fast, semi-analytical model for induction motors (IMs) is presented. In comparison to traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous distribution of the magnetic flux density in

  10. Stability and Bifurcation Analysis of a Modified Epidemic Model for Computer Viruses

    Directory of Open Access Journals (Sweden)

    Chuandong Li

    2014-01-01

    We extend the three-dimensional SIR model to a four-dimensional case and then analyze its dynamical behavior, including stability and bifurcation. It is shown that the new model makes a significant improvement to the epidemic model for computer viruses and is more reasonable than most existing SIR models. Furthermore, we investigate the stability of the possible equilibrium point and the existence of the Hopf bifurcation with respect to the delay. By analyzing the associated characteristic equation, it is found that Hopf bifurcation occurs when the delay passes through a sequence of critical values. An analytical condition for determining the direction, stability, and other properties of the bifurcating periodic solutions is obtained by using the normal form theory and center manifold argument. The obtained results may provide a theoretical foundation for understanding the spread of computer viruses and minimizing virus risks.

  11. Determining passive cooling limits in CPV using an analytical thermal model

    Science.gov (United States)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original analytical thermal model that aims to predict the practical limits of passive cooling systems for high-concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat-plate cooling systems in natural convection are then derived and discussed.
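
    The abstract gives no formulas; a back-of-the-envelope version of this kind of passive-cooling limit estimate, assuming a vertical flat plate in air and the common simplified correlation h ≈ 1.42·(ΔT/L)^0.25, is sketched below with purely illustrative numbers:

```python
def plate_temperature_rise(q_w, area_m2, length_m, h_guess=5.0, iters=50):
    """Estimate the steady temperature rise of a passively cooled vertical
    plate dissipating q_w watts, using the simplified natural-convection
    correlation for air, h ~= 1.42 * (dT / L)**0.25 [W/m^2/K], iterated
    to self-consistency (a textbook simplification, not the paper's model)."""
    dT = q_w / (h_guess * area_m2)          # first guess from assumed h
    for _ in range(iters):
        h = 1.42 * (dT / length_m) ** 0.25  # update film coefficient
        dT = q_w / (h * area_m2)            # update temperature rise
    return dT, h

# Illustrative only: 20 W of waste heat over a 10 cm x 10 cm vertical plate
dT, h = plate_temperature_rise(q_w=20.0, area_m2=0.01, length_m=0.1)
print(f"dT ~ {dT:.0f} K with h ~ {h:.1f} W/m^2/K")
```

    The large temperature rise returned for this small plate illustrates why passive cooling of highly concentrated modules quickly runs into practical limits.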

  12. 2D Analytical Modeling of Magnetic Vector Potential in Surface Mounted and Surface Inset Permanent Magnet Machines

    Directory of Open Access Journals (Sweden)

    A. Jabbari

    2017-12-01

    A 2D analytical method for calculating the magnetic vector potential in inner-rotor surface-mounted and surface-inset permanent magnet machines, considering slotting effects, magnetization orientation and winding layout, is proposed in this paper. The analytical method is based on the resolution of the Laplace, Poisson and Maxwell equations in quasi-Cartesian coordinates using the sub-domain method and hyperbolic functions. The developed method is applied to the performance computation of two prototype surface-mounted permanent magnet motors and two prototype surface-inset permanent magnet motors. Both radial and parallel magnetization orientations are considered for each type of motor. The results of these models are validated against FEM results.
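
    In outline, the sub-domain method referred to solves a Laplace or Poisson equation in each region and matches the solutions at the interfaces. A generic statement of that structure (assumed notation, not the paper's exact formulation) is:

```latex
% Governing equation per sub-domain for the axial vector potential A_z:
\nabla^2 A_z = 0                                  \quad\text{(air gap)}
\nabla^2 A_z = -\mu_0 J_z                         \quad\text{(slot with winding current)}
\nabla^2 A_z = -\mu_0 \left(\nabla\times\mathbf{M}\right)_z \quad\text{(magnet region)}
% Generic separable solution in quasi-Cartesian coordinates (s,t), built
% from hyperbolic and trigonometric functions; the coefficients are fixed
% by continuity of A_z and of the tangential field at each interface:
A_z(s,t) = \sum_{n} \bigl[a_n\cosh(k_n s) + b_n\sinh(k_n s)\bigr]
                    \bigl[c_n\cos(k_n t) + d_n\sin(k_n t)\bigr]
```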

  13. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling; O'Brien, James

    2016-11-01

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories’ original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and
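
    For orientation, a textbook ideal-gas sketch of the two ingredients (choked flow through the nozzle, then free expansion to ambient pressure) is shown below. This is not the RELAP-7 implementation; the discharge coefficient, steam-as-ideal-gas constants, and operating numbers are illustrative placeholders:

```python
import math

def choked_mass_flow(p0, T0, a_throat, gamma=1.3, R=461.5, cd=0.97):
    """Ideal-gas choked mass flow [kg/s] through a nozzle throat, with steam
    approximated as an ideal gas (gamma ~ 1.3, R ~ 461.5 J/kg/K)."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * a_throat * p0 * math.sqrt(gamma / (R * T0)) * term

def jet_exit_velocity(p0, T0, p_amb, gamma=1.3, R=461.5):
    """Velocity after isentropic expansion from stagnation to ambient pressure."""
    cp = gamma * R / (gamma - 1.0)
    T_exit = T0 * (p_amb / p0) ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * cp * (T0 - T_exit))

# Illustrative numbers only (not plant data): 7 MPa steam, 559 K, 1 cm^2 throat
mdot = choked_mass_flow(7.0e6, 559.0, 1.0e-4)
v = jet_exit_velocity(7.0e6, 559.0, 0.1e6)
print(f"mdot ~ {mdot:.2f} kg/s, jet velocity ~ {v:.0f} m/s (supersonic)")
```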

  14. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles with integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite-mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily calculable higher-order derivatives of the lens potential when studying catastrophes in strong lenses.
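
    For reference, the truncated profile described here is widely quoted in the form

```latex
\rho(r) \;=\; \frac{\rho_s}{x\,(1+x)^2}
        \left(\frac{\tau^2}{\tau^2 + x^2}\right)^{\!n},
\qquad x \equiv r/r_s,\quad \tau \equiv r_t/r_s ,
```

    with n = 1 giving the r^(−5) outer falloff and n = 2 the r^(−7) falloff quoted above, while keeping the total mass finite.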

  15. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2015-08-01

    The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Networking (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain a thorough understanding of service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies, and is thus applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

  16. Task Analytic Models to Guide Analysis and Design: Use of the Operator Function Model to Represent Pilot-Autoflight System Mode Problems

    Science.gov (United States)

    Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)

    1995-01-01

    Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data, including time, ATC clearances, altitude, system states, active modes and sub-modes, and mode engagements, were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion. The goal of this paper is twofold

  17. Computational modeling of ultra-short-pulse ablation of enamel

    Energy Technology Data Exchange (ETDEWEB)

    London, R.A.; Bailey, D.S.; Young, D.A. [and others]

    1996-02-29

    A computational model for the ablation of tooth enamel by ultra-short laser pulses is presented. The role of simulations using this model in designing and understanding laser drilling systems is discussed. Pulses of duration 300 fs and intensity greater than 10^12 W/cm^2 are considered. Laser absorption proceeds via a multi-photon-initiated plasma mechanism. The hydrodynamic response is calculated with a finite difference method, using an equation of state constructed from thermodynamic functions including electronic, ion motion, and chemical binding terms. Results for the ablation efficiency are presented. An analytic model describing the ablation threshold and ablation depth is presented. Thermal coupling to the remaining tissue and long-time thermal conduction are calculated. Simulation results are compared to experimental measurements of the ablation efficiency. Desired improvements in the model are presented.

  18. Computer modeling of jet mixing in INEL waste tanks

    International Nuclear Information System (INIS)

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review the erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine the overall feasibility of using jet mixing pumps and make design recommendations.

  19. The Potential Value of Clostridium difficile Vaccine: An Economic Computer Simulation Model

    Science.gov (United States)

    Lee, Bruce Y.; Popovich, Michael J.; Tian, Ye; Bailey, Rachel R.; Ufberg, Paul J.; Wiringa, Ann E.; Muder, Robert R.

    2010-01-01

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision-analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risks, vaccine costs, and vaccine efficacies, especially when used post-CDI treatment to prevent recurrent disease. PMID:20541582

  20. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
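
    The analytical model itself is not given in the abstract. A generic form consistent with the behavior it reports (speedup approaching the GPU count when compute dominates transfers) splits wall time into compute divided across GPUs plus data transfer serialized on the shared link; the sketch below is illustrative, not the paper's model:

```python
def mgcs_time_model(t_compute, data_bytes, n_gpus, link_gbps=1.0):
    """Generic multi-GPU server time model (illustrative): compute is
    divided across GPUs; input/output data cross a shared network link."""
    t_transfer = data_bytes * 8 / (link_gbps * 1e9)  # serialized on the link
    return t_compute / n_gpus + t_transfer

def speedup(t_compute, data_bytes, n_gpus, link_gbps=1.0):
    t1 = mgcs_time_model(t_compute, data_bytes, 1, link_gbps)
    tn = mgcs_time_model(t_compute, data_bytes, n_gpus, link_gbps)
    return t1 / tn

# Compute-bound task: speedup approaches the GPU count (14 here)
print(speedup(t_compute=100.0, data_bytes=50e6, n_gpus=14))   # ~13.3
# Transfer-bound task: speedup collapses toward 1
print(speedup(t_compute=1.0, data_bytes=5e9, n_gpus=14))      # ~1.02
```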

  1. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    Science.gov (United States)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally tractable layout optimization strategy, is therefore an efficient resource when addressing the problem. This work presents a novel soft-computing approach to optimizing the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
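
    The Gaussian wake model of [1] is commonly written as in the sketch below; the rotor parameters and the wake-growth rate k* are illustrative placeholders, not the values used in this study:

```python
import numpy as np

def gaussian_wake_deficit(x, y, z, d0, z_h, ct, k_star=0.035):
    """Normalized velocity deficit DU/U_inf behind a turbine, following the
    Gaussian wake model of Bastankhah & Porte-Agel (2014).
    x: downwind, y: spanwise, z: vertical coordinate [m]; d0: rotor
    diameter; z_h: hub height; ct: thrust coefficient; k_star: growth rate."""
    beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
    sigma = d0 * (k_star * x / d0 + 0.2 * np.sqrt(beta))      # wake width
    c = 1.0 - np.sqrt(1.0 - ct / (8.0 * (sigma / d0) ** 2))   # peak deficit
    return c * np.exp(-(y ** 2 + (z - z_h) ** 2) / (2.0 * sigma ** 2))

# Hub-height centerline deficit 7 rotor diameters downwind (illustrative)
print(gaussian_wake_deficit(x=7 * 80.0, y=0.0, z=70.0,
                            d0=80.0, z_h=70.0, ct=0.8))
```

    Summing such deficits over all upstream turbines gives the farm-level cost function that the evolutionary algorithm evaluates at every generation.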

  2. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  3. Model and Analytic Processes for Export License Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.; Wood, Thomas W.; Daly, Don S.; Brothers, Alan J.; Sanfilippo, Antonio P.; Cook, Diane; Holder, Larry

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that, alone or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final two years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate, conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided according to whether they could support micro-level assessment (e.g., improving individual license assessments) or macro-level assessment. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An

  4. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    Science.gov (United States)

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
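
    A minimal sketch of the AHP machinery used in such a study (the principal-eigenvector priority weights and Saaty's consistency ratio) is shown below; the criteria names and pairwise judgments are made up for illustration:

```python
import numpy as np

# Saaty's random consistency indices for matrix sizes n = 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    n = a.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)     # consistency index
    return w, ci / RI[n]                     # weights, consistency ratio

# Hypothetical 3-criterion comparison: cost effectiveness vs software
# design vs system architecture (judgments invented for illustration)
m = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w, cr = ahp_weights(m)
print(w, f"CR = {cr:.3f}")   # CR < 0.1 indicates acceptable consistency
```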

  5. TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction

    International Nuclear Information System (INIS)

    Li, G; Yuan, A; Wei, J

    2014-01-01

    Purpose: Breathing irregularity is common, causing unreliable tumor motion prediction for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between the two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to 1.7±1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with the limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). Non-corrected motion discrepancy between the two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, and correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no

  6. Hydraulic modeling of riverbank filtration systems with curved boundaries using analytic elements and series solutions

    Science.gov (United States)

    Bakker, Mark

    2010-08-01

    A new analytic solution approach is presented for the modeling of steady flow to pumping wells near rivers in strip aquifers; all boundaries of the river and strip aquifer may be curved. The river penetrates the aquifer only partially and has a leaky stream bed. The water level in the river may vary spatially. Flow in the aquifer below the river is semi-confined while flow in the aquifer adjacent to the river is confined or unconfined and may be subject to areal recharge. Analytic solutions are obtained through superposition of analytic elements and Fourier series. Boundary conditions are specified at collocation points along the boundaries. The number of collocation points is larger than the number of coefficients in the Fourier series and a solution is obtained in the least squares sense. The solution is analytic while boundary conditions are met approximately. Very accurate solutions are obtained when enough terms are used in the series. Several examples are presented for domains with straight and curved boundaries, including a well pumping near a meandering river with a varying water level. The area of the river bottom where water infiltrates into the aquifer is delineated and the fraction of river water in the well water is computed for several cases.
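
    A toy version of the solution strategy (not Bakker's actual analytic elements) shows the over-determined collocation step: more boundary collocation points than series coefficients, solved in the least-squares sense:

```python
import numpy as np

# Approximate a prescribed head h_b(s) along a boundary by a truncated
# Fourier series, using more collocation points than coefficients and
# solving the over-determined system in the least-squares sense.
n_coeff, n_colloc = 11, 40                # more points than coefficients
s = np.linspace(0.0, 1.0, n_colloc)       # boundary coordinate
h_b = 1.0 + 0.3 * s**2                    # prescribed boundary head (toy)

# Design matrix: constant term plus cos/sin pairs at the collocation points
cols = [np.ones_like(s)]
for n in range(1, (n_coeff - 1) // 2 + 1):
    cols += [np.cos(2 * np.pi * n * s), np.sin(2 * np.pi * n * s)]
A = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(A, h_b, rcond=None)
print("max boundary misfit:", np.abs(A @ coeffs - h_b).max())
```

    As in the paper, the series solution is exact in the interior while the boundary conditions are met approximately, with the misfit shrinking as more terms are used.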

  7. X-ray beam-shaping via deformable mirrors: Analytical computation of the required mirror profile

    International Nuclear Information System (INIS)

    Spiga, Daniele; Raimondi, Lorenzo; Svetina, Cristian; Zangrando, Marco

    2013-01-01

    X-ray mirrors with high focusing performance are in use both in mirror modules for X-ray telescopes and in synchrotron and FEL (Free Electron Laser) beamlines. A degradation of the focus sharpness arises in general from geometrical deformations and surface roughness, the former usually described by geometrical optics and the latter by physical optics. In general, technological developments are aimed at very tight focusing, which requires the mirror profile to comply with the nominal shape as much as possible and to keep the roughness at a negligible level. However, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators, as done at the EIS-TIMEX beamline of FERMI@Elettra. The resulting profile can be characterized with a Long Trace Profilometer and correlated with the expected optical quality via a wavefront propagation code. However, if the roughness contribution can be neglected, the computation can be performed via a ray-tracing routine, and, under appropriate assumptions, the focal spot profile (the Point Spread Function, PSF) can even be predicted analytically. The advantage of this approach is that the analytical relation can be reversed; i.e., from the desired PSF the required mirror profile can be computed easily, thereby avoiding the use of complex and time-consuming numerical codes. The method is also suited to the case of spatially inhomogeneous beam intensities, as commonly encountered at synchrotrons and FELs. In this work we present the analytical method and its application to the beam-shaping problem.

  8. Simulation of a welding process in polyduct pipelines resolved with a finite elements computational model. Comparison with analytical solutions and tests with thermocouples

    International Nuclear Information System (INIS)

    Sanzi, H; Elvira, G; Kloster, M; Asta, E; Zalazar, M

    2006-01-01

    All welding processes induce deformations and thermal stresses, which must be evaluated correctly since they can influence a component's structural integrity. This work determines the distribution of temperatures that develop during a manual welding process with shielded electrodes (SMAW) on the circumferential seam of a pipe for use in polyducts. A simplified finite element (FEA) model using three-dimensional solids is proposed for the study. The analysis considers that while the welding process is underway, no heat is lost to the environment, that is, adiabatic conditions are assumed, and that the transformations produced in the material due to phase changes do not modify the properties of the supporting or base materials. The results of the simulation are compared with those obtained by recent analytical studies developed by different investigators, such as Nguyen, Ohta, Matsuoka, Suzuki and Taeda, where a continuously moving three-dimensional double-ellipsoidal source was used. The results are then compared with experimental results obtained by measuring with thermocouples. This study demonstrates the sensitivity and validity of the proposed computer model and, in a second stage, optimizes the engineering time needed to solve a problem like the one presented, in order to design the corresponding welding procedure (CW).

  9. Analytical Modelling and Simulation of Photovoltaic Panels and Arrays

    Directory of Open Access Journals (Sweden)

    H. Bourdoucen

    2007-12-01

    In this paper, an analytical model for PV panels and arrays based on extracted physical parameters of solar cells is developed. The proposed model has the advantage of simplifying the mathematical modelling of different configurations of cells and panels without losing efficiency of PV system operation. The effects of external parameters, mainly temperature and solar irradiance, have been considered in the modelling. Because of their critical effect on the operation of the panel, the effects of series and shunt resistances were also studied. The developed analytical model has been easily implemented, simulated and validated using both Spice and Matlab packages for different series and parallel configurations of cells and panels. The results obtained with these two programs are in total agreement, which makes the proposed model very useful for researchers and designers for quick and accurate sizing of PV panels and arrays.
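
    A common building block of such cell-parameter-based models is the single-diode equation; a minimal sketch (with illustrative parameters, not those extracted in the paper) solves it numerically for the cell current:

```python
import numpy as np
from scipy.optimize import brentq

def pv_current(v, i_ph, i_0, r_s, r_sh, n=1.3, t_cell=298.15):
    """Solve the implicit single-diode equation for cell current I at voltage V:
    I = I_ph - I_0*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh."""
    v_t = 1.380649e-23 * t_cell / 1.602176634e-19   # thermal voltage kT/q
    f = lambda i: (i_ph - i_0 * (np.exp((v + i * r_s) / (n * v_t)) - 1.0)
                   - (v + i * r_s) / r_sh - i)
    return brentq(f, -i_ph, 2.0 * i_ph)             # bracketed root find

# Illustrative single-cell parameters (not from any specific panel)
params = dict(i_ph=5.0, i_0=1e-9, r_s=0.02, r_sh=50.0)
for v in (0.0, 0.3, 0.5, 0.55):
    print(f"V = {v:.2f} V -> I = {pv_current(v, **params):.3f} A")
```

    Panels and arrays are then modelled by combining such cells in series and parallel, which is precisely the configuration bookkeeping the analytical model simplifies.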

  10. Cloud Computing: A model Construct of Real-Time Monitoring for Big Dataset Analytics Using Apache Spark

    Science.gov (United States)

    Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer

    2018-01-01

    The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity in cloud computing, where large-scale data processing, analysis, and storage platforms were already widespread and continue to grow. Today, the major challenge is to address how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms with Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the procedure of fine-grained monitoring and yields the most effective and cost-saving fault repair through three control steps: (I) data collection; (II) an analysis engine; and (III) a decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.

  11. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    Science.gov (United States)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

    We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, the size of the largest object, r_max, changes with time as r_max ∝ t^(−γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(−(γ+1)) and L_d ∝ t^(−(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.

  12. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  13. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  14. An analytical model for droplet separation in vane separators and measurements of grade efficiency and pressure drop

    International Nuclear Information System (INIS)

    Koopman, Hans K.; Köksoy, Çağatay; Ertunç, Özgür; Lienhart, Hermann; Hedwig, Heinz; Delgado, Antonio

    2014-01-01

    Highlights: • An analytical model for efficiency is extended with additional geometrical features. • A simplified and a novel vane separator design are investigated experimentally. • Experimental results are significantly affected by re-entrainment effects. • Outlet droplet size spectra are accurately predicted by the model. • The improved grade efficiency doubles the pressure drop. - Abstract: This study investigates the predictive power of analytical models for the droplet separation efficiency of vane separators and compares experimental results of two different vane separator geometries. The ability to predict the separation efficiency of vane separators simplifies their design process, especially when analytical research allows the identification of the most important physical and geometrical parameters and can quantify their contribution. In this paper, an extension of a classical analytical model for separation efficiency is proposed that accounts for the contributions provided by straight wall sections. The extension of the analytical model is benchmarked against experiments performed by Leber (2003) on a single stage straight vane separator. The model is in very reasonable agreement with the experimental values. Results from the analytical model are also compared with experiments performed on a vane separator of simplified geometry (VS-1). The experimental separation efficiencies, computed from the measured liquid mass balances, are significantly below the model predictions, which lie arbitrarily close to unity. This difference is attributed to re-entrainment through film detachment from the last stage of the vane separators. After adjustment for re-entrainment effects, by applying a cut-off filter to the outlet droplet size spectra, the experimental and theoretical outlet Sauter mean diameters show very good agreement. A novel vane separator geometry of patented design (VS-2) is also investigated, comparing experimental results with VS-1

  15. Predictive analytics and child protection: constraints and opportunities.

    Science.gov (United States)

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    Science.gov (United States)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

    Stream depletion due to groundwater extraction by wells may have an impact on the aquatic ecosystems of streams, cause conflict over water rights, and contaminate water drawn from irrigation wells near polluted streams. A variety of studies have been devoted to addressing the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has not yet been established. This review shows key differences between existing models of the stream depletion problem and provides some guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches that can solve the stream depletion problem under realistic conditions with aquifer heterogeneity and irregular stream channel geometry. Several unsolved stream depletion problems are also identified.
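
    As a point of reference for the class of models reviewed, the earliest analytical solution (Glover and Balmer, 1954), for a fully penetrating stream and a homogeneous semi-infinite aquifer, gives the depletion fraction as

```latex
\frac{Q_s}{Q_w} \;=\; \operatorname{erfc}\!\left(\sqrt{\frac{d^2 S}{4\,T\,t}}\right),
```

    where Q_s is the streamflow depletion rate, Q_w the pumping rate, d the well-to-stream distance, S the storativity, T the transmissivity, and t the time since pumping began; the more recent models surveyed in the review relax these idealizations.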

  17. Symbolic computation of analytic approximate solutions for nonlinear fractional differential equations

    Science.gov (United States)

    Lin, Yezhi; Liu, Yinping; Li, Zhibin

    2013-01-01

    The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials, Rach (2008) [22], the Adomian decomposition method and the Padé approximants technique, a new algorithm is proposed to construct analytic approximate solutions for nonlinear fractional differential equations with initial or boundary conditions. Furthermore, a MAPLE software package is developed to implement this new algorithm, which is user-friendly and efficient. One only needs to input the system equation, initial or boundary conditions and several necessary parameters, and the package will automatically deliver the analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the scope and demonstrate the validity of our package, especially for non-smooth initial value problems. Our package provides a helpful and easy-to-use tool in science and engineering simulations. Program summary: Program title: ADMP. Catalogue identifier: AENE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12011. No. of bytes in distributed program, including test data, etc.: 575551. Distribution format: tar.gz. Programming language: MAPLE R15. Computer: PCs. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Constructing analytic approximate solutions of nonlinear fractional differential equations with initial or boundary conditions. Non-smooth initial value problems can be solved by this program. Solution method: Based on the new definition of the Adomian polynomials [1], the Adomian decomposition method and the Padé approximants technique.

  18. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone-shaped concrete part, simplifying the problem by assuming pure rigid body motions, allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results; it predicts size effects, and the model parameters found by calibration of the model on experimental data are in good agreement with what should be expected.

  20. Computational modeling of geometry dependent phonon transport in silicon nanostructures

    Science.gov (United States)

    Cheney, Drew A.

    Recent experiments have demonstrated that the thermal properties of semiconductor nanostructures depend on nanostructure boundary geometry. Phonons are quantized mechanical vibrations that are the dominant carriers of heat in semiconductor materials, and their aggregate behavior determines a nanostructure's thermal performance. Phonon-geometry scattering processes, as well as waveguiding effects which result from coherent phonon interference, are responsible for the shape dependence of thermal transport in these systems. Nanoscale phonon-geometry interactions provide a mechanism by which nanostructure geometry may be used to create materials with targeted thermal properties. However, the ability to manipulate material thermal properties via nanostructure geometry is contingent upon first obtaining an increased theoretical understanding of fundamental geometry-induced phonon scattering processes and having robust analytical and computational models capable of exploring the nanostructure design space, simulating the phonon scattering events, and linking the behavior of individual phonon modes to overall thermal behavior. The overall goal of this research is to predict and analyze the effect of nanostructure geometry on thermal transport. To this end, a harmonic lattice-dynamics-based atomistic computational modeling tool was created to calculate phonon spectra and modal phonon transmission coefficients in geometrically irregular nanostructures. The computational tool is used to evaluate the accuracy and regimes of applicability of alternative computational techniques based upon continuum elastic wave theory. The model is also used to investigate phonon transmission and thermal conductance in diameter-modulated silicon nanowires. Motivated by the complexity of the transmission results, a simplified model based upon long-wavelength beam theory was derived; it helps explain geometry-induced phonon scattering of low-frequency nanowire phonon modes.

  1. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called dirac. To evaluate the dirac scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare it with a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system called dirac in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralized...
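
    The analytical reference used for validation is the standard M/M/c queue; a minimal sketch of the M/M/4 mean response time via the Erlang C formula (arrival and service rates are illustrative) follows:

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving job must queue in an M/M/c system
    (Erlang C formula); lam = arrival rate, mu = per-server service rate."""
    rho = lam / (c * mu)                      # utilization, must be < 1
    a = lam / mu                              # offered load
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1.0 - rho))
    return last / (s + last)

def mm_c_response_time(c, lam, mu):
    """Mean response time W = Wq + 1/mu for an M/M/c queue."""
    wq = erlang_c(c, lam, mu) / (c * mu - lam)   # mean time waiting in queue
    return wq + 1.0 / mu

# M/M/4 example: 4 nodes, jobs arriving at 3.2/s, served at 1/s per node
print(f"mean response time ~ {mm_c_response_time(4, 3.2, 1.0):.2f} s")
```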

  2. Analytic expressions for the construction of a fire event PSA model

    International Nuclear Information System (INIS)

    Kang, Dae Il; Kim, Kil Yoo; Kim, Dong San; Hwang, Mee Jeong; Yang, Joon Eon

    2016-01-01

    In this study, the process of changing an internal event PSA model into a fire event PSA model is analytically presented and discussed. Many fire PSA models have fire-induced initiating event fault trees not shown in an internal event PSA model. Fire-induced initiating event fault tree models are developed to address multiple initiating event issues. A single fire event within a fire compartment or fire scenario can cause multiple initiating events; as an example, a fire in a turbine building area can cause a loss of the main feed-water and loss of off-site power initiating events. Up to now, there has been no analytic study on the construction of a fire event PSA model from an internal event PSA model with fault trees of initiating events. In this paper, the process of changing an internal event PSA model into a fire event PSA model was analytically presented and discussed. The results of this study show that additional cutsets can be obtained if the fault trees of initiating events for a fire event PSA model are not exactly developed.

  3. Analytic expressions for the construction of a fire event PSA model

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Kim, Kil Yoo; Kim, Dong San; Hwang, Mee Jeong; Yang, Joon Eon [KAERI, Daejeon (Korea, Republic of)]

    2016-05-15

    In this study, the process of changing an internal event PSA model into a fire event PSA model is analytically presented and discussed. Many fire PSA models have fire-induced initiating event fault trees not shown in an internal event PSA model. Fire-induced initiating event fault tree models are developed to address multiple initiating event issues. A single fire event within a fire compartment or fire scenario can cause multiple initiating events; as an example, a fire in a turbine building area can cause a loss of the main feed-water and loss of off-site power initiating events. Up to now, there has been no analytic study on the construction of a fire event PSA model from an internal event PSA model with fault trees of initiating events. In this paper, the process of changing an internal event PSA model into a fire event PSA model was analytically presented and discussed. The results of this study show that additional cutsets can be obtained if the fault trees of initiating events for a fire event PSA model are not exactly developed.

  4. A theoretical study of CsI:Tl columnar scintillator image quality parameters by analytical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalyvas, N., E-mail: nkalyvas@teiath.gr; Valais, I.; Michail, C.; Fountos, G.; Kandarakis, I.; Cavouras, D.

    2015-04-11

    Medical X-ray digital imaging systems, such as mammography, radiography and computed tomography (CT) systems, are built around efficient radiation detectors which convert the X-rays into an electronic signal. Scintillators are materials that emit light when excited by X-rays and are incorporated in X-ray medical imaging detectors. Columnar scintillators like CsI:Tl are very often used for X-ray detection due to their higher performance. The columnar form limits the lateral spread of the optical photons toward the scintillator output; thus it demonstrates superior spatial resolution compared to granular scintillators. The aim of this work is to provide an analytical model for calculating the MTF, the DQE and the emission efficiency of a columnar scintillator. The model parameters were validated against published Monte Carlo data. The model was able to predict the overall performance of CsI:Tl scintillators and suggested an optimum thickness of 300 μm for radiography applications. - Highlights: • An analytical model for calculating the MTF, DQE and Detector Optical Gain (DOG) of columnar phosphors was developed. • The model was fitted to published efficiency and MTF Monte Carlo data. • A good fit was observed for a 300 μm columnar CsI:Tl thickness. • The performance of the 300 μm column-thickness CsI:Tl was better in terms of MTF and DOG for radiographic applications.

  5. Symbolic computation of nonlinear wave interactions on MACSYMA

    International Nuclear Information System (INIS)

    Bers, A.; Kulp, J.L.; Karney, C.F.F.

    1976-01-01

    In this paper the use of a large symbolic computation system - MACSYMA - in determining approximate analytic expressions for the nonlinear coupling of waves in an anisotropic plasma is described. MACSYMA was used to implement the solution of the nonlinear partial differential equations of a fluid plasma model by perturbation expansions and subsequent iterative analytic computations. By interacting with the details of the symbolic computation, the physical processes responsible for particular nonlinear wave interactions could be uncovered and appropriate approximations introduced so as to simplify the final analytic result. Details of the MACSYMA system and its use are discussed and illustrated. (Auth.)

  6. Analytical solution of dispersion relations for the nuclear optical model

    Energy Technology Data Exchange (ETDEWEB)

    VanderKam, J.M. [Center for Communications Research, Thanet Road, Princeton, NJ 08540 (United States); Weisel, G.J. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States); Penn State Altoona, 3000 Ivyside Park, Altoona, PA 16601-3760 (United States); Tornow, W. [Triangle Universities Nuclear Laboratory, and Duke University, Box 90308, Durham, NC 27708-0308 (United States)

    2000-12-01

    Analytical solutions of dispersion integral relations, linking the real and imaginary parts of the nuclear optical model, have been derived. These are displayed for some widely used forms of the volume- and surface-absorptive nuclear potentials. When the analytical solutions are incorporated into the optical-model search code GENOA, replacing a numerical integration, the code runs three and a half to seven times faster, greatly aiding the analysis of direct-reaction, elastic scattering data. (author)

  7. An analytical model for annular flow boiling heat transfer in microchannel heat sinks

    International Nuclear Information System (INIS)

    Megahed, A.; Hassan, I.

    2009-01-01

    An analytical model has been developed to predict the flow boiling heat transfer coefficient in microchannel heat sinks. The new analytical model is proposed to predict the two-phase heat transfer coefficient during the annular flow regime based on the separated flow model. Unlike the majority of annular flow heat transfer models, this model is based on fundamental conservation principles. The model considers the characteristics of the microchannel heat sink during annular flow and eliminates the use of any empirical closure relations. Comparison with the limited experimental data available was found to validate the usefulness of this analytical model. The model predicts the experimental data with a mean absolute error of 8%. (author)

  8. Analytic models for beam propagation and far-field patterns in slab and bow-tie x-ray lasers

    International Nuclear Information System (INIS)

    Chandler, E.A.

    1994-06-01

    Simplified analytic models for beam propagation in slab and bow-tie x-ray lasers yield convenient expressions that provide both a framework for guidance in computer modeling and useful approximations for experimenters. In unrefracted bow-tie lasers, the laser shape, in conjunction with the nearly exponential weighting of rays according to their length, produces a small effective aperture for the signal. We develop an analytic expression for this aperture and for the properties of the far-field signal. Similarly, we develop the view that the far-field pattern of refractive slab lasers is the result of effective apertures that are created by the interplay of refraction and exponential amplification. We present expressions for the size of this aperture as a function of laser parameters, as well as for the intensity and position of the far-field lineout. This analysis also yields conditions for the refraction limit in slab lasers and an estimate for the signal loss due to refraction.

  9. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    Science.gov (United States)

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated against experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and a bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment plan was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by Report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with targeting accuracies of 0.2 mm or more, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
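
    A crude sketch of a range-based analytic depth-dose curve in the spirit described above: with the Bragg-Kleeman rule R = alpha*E^p (generic textbook water values for alpha and p, assumed here in place of the paper's experimental range database), inverting the range-energy relation gives the Bragg-curve shape; straggling, nuclear interactions and lateral spread are ignored.

        import numpy as np

        alpha, p = 0.0022, 1.77   # Bragg-Kleeman constants for water (R in cm, E in MeV)
        E0 = 24.0                 # beam energy (MeV), as in the example above
        R = alpha * E0**p         # proton range in water (cm)

        z = np.linspace(0.0, 0.999 * R, 200)
        # dose ~ -dE/dz, obtained by inverting R - z = alpha * E(z)**p
        dose = ((R - z) / alpha)**((1.0 - p) / p) / (p * alpha)
        print(f"range: {R:.3f} cm, peak-to-entrance dose ratio: {dose[-1] / dose[0]:.1f}")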

  10. SIMMER-III analytic thermophysical property model

    International Nuclear Information System (INIS)

    Morita, K.; Tobita, Y.; Kondo, Sa.; Fischer, E.A.

    1999-05-01

    An analytic thermophysical property model using general function forms is developed for a reactor safety analysis code, SIMMER-III. The function forms are designed to represent the correct behavior of the properties of reactor-core materials over wide temperature ranges, especially the thermal conductivity and viscosity near the critical point. The most up-to-date and reliable sources currently available for uranium dioxide, mixed-oxide fuel, stainless steel, and sodium are used to determine the parameters in the proposed functions. The model is also designed to be consistent with the SIMMER-III model of thermodynamic properties and equations of state for reactor-core materials. (author)

  11. An analytical model for the assessment of airline expansion strategies

    Directory of Open Access Journals (Sweden)

    Mauricio Emboaba Moreira

    2014-01-01

    Purpose: The purpose of this article is to develop an analytical model to assess airline expansion strategies by combining generic business strategy models with airline business models. Methodology and approach: A number of airline business models are examined, as are Porter's (1983) five industry forces that drive competition, complemented by Nalebuff and Brandenburger's (1996) sixth force, and the basic elements of the general environment in which the expansion process takes place. A system of points and weights is developed to create a score among the 904,736 possible combinations considered. The model's outputs are generic expansion strategies with quantitative assessments for each specific combination of elements inputted. Originality and value: The analytical model developed is original because it combines, for the first time and explicitly, elements of the general environment, the industry environment, airline business models and the generic expansion strategy types. Besides, it creates a system of scores that may be used to drive the decision process toward the choice of a specific strategic expansion path. Research implications: The analytical model may be adapted to other industries apart from the airline industry by substituting the element "airline business model" with the corresponding elements of the business models of those industries.

  12. Analytical Chemistry Division's sample transaction system

    International Nuclear Information System (INIS)

    Stanton, J.S.; Tilson, P.A.

    1980-10-01

    The Analytical Chemistry Division uses the DECsystem-10 computer for a wide range of tasks: sample management, timekeeping, quality assurance, and data calculation. This document describes the features and operating characteristics of many of the computer programs used by the Division. The descriptions are divided into chapters, each of which covers one aspect of the Analytical Chemistry Division's computer processing.

  13. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  14. A Unified Channel Charges Expression for Analytic MOSFET Modeling

    Directory of Open Access Journals (Sweden)

    Hugues Murray

    2012-01-01

    Based on the resolution of a 1D Poisson equation, we present an analytic model of inversion charges allowing calculation of the drain current and transconductance in the metal-oxide-semiconductor field-effect transistor (MOSFET). The drain current and transconductance are described by analytical functions including mobility corrections and short-channel effects (CLM, DIBL). Comparison with the Pao-Sah integral shows excellent accuracy of the model in all inversion modes, from strong to weak inversion, in submicron MOSFETs. All calculations are encoded in a simple C program and give instantaneous results, providing an efficient tool for microelectronics users.
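
    For comparison, the record describes a charge-based model implemented in C; the sketch below (Python, kept consistent with the other examples in this collection) shows only the far simpler textbook square-law drain-current model with a first-order channel-length-modulation (CLM) correction, with all parameter values assumed.

        def drain_current(vgs, vds, vth=0.7, k=2e-4, lam=0.02):
            """Textbook square-law n-MOSFET: k = mu*Cox*W/L (A/V^2), lam = CLM (1/V)."""
            vov = vgs - vth
            if vov <= 0.0:
                return 0.0                                # subthreshold ignored here
            if vds < vov:                                 # triode region
                return k * (vov * vds - 0.5 * vds**2)
            return 0.5 * k * vov**2 * (1.0 + lam * vds)   # saturation with CLM

        print(drain_current(vgs=1.5, vds=2.0))            # ~6.7e-5 A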

  15. Applying computer modeling to eddy current signal analysis for steam generator and heat exchanger tube inspections

    International Nuclear Information System (INIS)

    Sullivan, S.P.; Cecco, V.S.; Carter, J.R.; Spanner, M.; McElvanney, M.; Krause, T.W.; Tkaczyk, R.

    2000-01-01

    Licensing requirements for eddy current inspections of nuclear steam generators and heat exchangers are becoming increasingly stringent. The traditional industry-standard method of comparing inspection signals with flaw signals from simple in-line calibration standards is proving to be inadequate. A more complete understanding of eddy current and magnetic field interactions with flaws and other anomalies is required for the industry to generate consistently reliable inspections. Computer modeling is a valuable tool for improving the reliability of eddy current signal analysis. Results from computer modeling are helping inspectors to properly discriminate between real flaw signals and false calls, and are improving reliability in flaw sizing. This presentation will discuss complementary eddy current computer modeling techniques such as the Finite Element Method (FEM), the Volume Integral Method (VIM), the Layer Approximation and other analytic methods. Each of these methods has advantages and limitations. An extension of the Layer Approximation to model eddy current probe responses to ferromagnetic materials will also be presented. Finally, examples will be discussed demonstrating how some significant eddy current signal analysis problems have been resolved using appropriate electromagnetic computer modeling tools

  16. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    Science.gov (United States)

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.

  17. Influence from cavity decay on geometric quantum computation in the large-detuning cavity QED model

    International Nuclear Information System (INIS)

    Chen Changyong; Zhang Xiaolong; Deng Zhijiao; Gao Kelin; Feng Mang

    2006-01-01

    We introduce a general displacement operator to investigate the unconventional geometric quantum computation with dissipation under the model of many identical three-level atoms in a cavity, driven by a classical field. Our concrete calculation is made for the case of two atoms, based on a previous scheme [S.-B. Zheng, Phys. Rev. A 70, 052320 (2004)] for the large-detuning interaction of the atoms with the cavity mode. The analytical results we present will be helpful for experimental realization of geometric quantum computation in real cavities

  18. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, the stability condition, CPU utilization for interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems subjected to light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
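
    The first-order picture behind such models can be sketched as follows (an illustration only; the paper derives more refined closed-form solutions, and the per-packet CPU costs below are assumptions): interrupt handling preempts protocol processing, so as the arrival rate grows the CPU share left for protocol work shrinks and delivered throughput collapses, the receive-livelock effect mentioned above.

        def throughput(lam, t_int=2e-6, t_proto=8e-6):
            """lam: packet arrival rate (pkt/s); t_int, t_proto: CPU seconds per packet."""
            cpu_int = min(1.0, lam * t_int)      # CPU fraction consumed by interrupts
            cpu_left = 1.0 - cpu_int             # fraction left for protocol processing
            return min(lam, cpu_left / t_proto)  # delivered packets per second

        for rate in (10e3, 50e3, 100e3, 300e3, 500e3):
            print(f"{rate:8.0f} pkt/s in -> {throughput(rate):8.0f} pkt/s out")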

  19. Analytical model spectrum for electrostatic turbulence in tokamaks

    International Nuclear Information System (INIS)

    Fiedler-Ferrari, N.; Misguich, J.H.

    1990-04-01

    In this work we present an analytical model spectrum for three-dimensional electrostatic turbulence (homogeneous, stationary and locally isotropic in the plane perpendicular to the magnetic field), constructed using experimental results from the TFR and TEXT tokamaks and satisfying basic symmetry and parity conditions. The proposed spectrum appears tractable for explicit analytical calculations of transport processes and consistent with experimental data. Additional experimental measurements in the bulk plasma remain necessary, however, in order to determine some unknown spectral properties of parallel propagation

  20. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets, but the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker-Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries, such as edge effects or layering, that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  1. Building analytical three-field cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.R.L. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Moraes, P.H.R.S. [ITA-Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP (Brazil); Ferreira, D.A. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, PB (Brazil); Neta, D.C.V. [Universidade Federal de Campina Grande, Unidade Academica de Fisica, Campina Grande, PB (Brazil); Universidade Estadual da Paraiba, Departamento de Fisica, Campina Grande, PB (Brazil)

    2018-02-15

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three-scalar-field model. An interesting scenario where the method can be implemented is in inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters. (orig.)

  2. Communication Theoretic Data Analytics

    OpenAIRE

    Chen, Kwang-Cheng; Huang, Shao-Lun; Zheng, Lizhong; Poor, H. Vincent

    2015-01-01

    Widespread use of the Internet and social networks drives the generation of big data, which is proving to be useful in a number of applications. To deal with explosively growing amounts of data, data analytics has emerged as a critical technology related to computing, signal processing, and information networking. In this paper, a formalism is considered in which data is modeled as a generalized social network, and communication theory and information theory are thereby extended to data analy...

  3. Analytical Network Process (ANP) Model for Tourism Development in Jember

    Directory of Open Access Journals (Sweden)

    Sukidin Sukidin

    2015-04-01

    This study examines tourism development policy in Jember, in particular the development of coffee plantation agro-tourism using the Jember Fashion Carnival (JFC) as event marketing. The research method is soft system methodology using the Analytical Network Process (ANP). The study finds that tourism development in Jember still follows a conventional approach, is poorly coordinated, and relies on a single tourism event, the JFC, as the locomotive of Jember's tourism appeal. This conventional development model needs to be redesigned to achieve sustainable tourism development in Jember. Keywords: paradigm shift, tourism industry, tourism events, agro-tourism

  4. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  5. Analytical model for nonlinear piezoelectric energy harvesting devices

    International Nuclear Information System (INIS)

    Neiss, S.; Goldschmidtboeing, F.; Kroener, M.; Woias, P.

    2014-01-01

    In this work we propose analytical expressions for the jump-up and jump-down points of a nonlinear piezoelectric energy harvester. In addition, analytical expressions for the maximum power output at optimal resistive load and for the 3 dB bandwidth are derived. So far, only numerical models have been used to describe the physics of piezoelectric energy harvesters. However, that approach is not suitable for quickly evaluating different geometrical designs or piezoelectric materials in the harvester design process. In addition, the analytical expressions could be used to predict the jump frequencies of a harvester during operation. In combination with a tuning mechanism, this would allow the design of an efficient control algorithm to ensure that the harvester always operates on the oscillator's high-energy attractor. (paper)

  6. A semi-analytical solution to accelerate spin-up of a coupled carbon and nitrogen land model to steady state

    Directory of Open Access Journals (Sweden)

    J. Y. Xia

    2012-10-01

    The spin-up of land models to steady state of coupled carbon-nitrogen processes is computationally so costly that it becomes a bottleneck for global analysis. In this study, we introduce a semi-analytical solution (SAS) for the spin-up issue. SAS is fundamentally based on the analytic solution to a set of equations that describe carbon transfers within ecosystems over time. SAS is implemented in three steps: (1) an initial spin-up with prior pool-size values until net primary productivity (NPP) stabilizes, (2) calculation of quasi-steady-state pool sizes by setting the fluxes of the equations to zero, and (3) a final spin-up to meet the criterion of steady state. Step 2 is enabled by averaging time-varying variables over one period of the repeated driving forcings. SAS was applied to both site-level and global-scale spin-up of the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model. For the carbon-cycle-only simulations, SAS saved 95.7% and 92.4% of computational time for site-level and global spin-up, respectively, in comparison with the traditional method (a long-term iterative simulation to achieve the steady states of variables). For the carbon-nitrogen coupled simulations, SAS reduced computational cost by 84.5% and 86.6% for site-level and global spin-up, respectively. The estimated steady-state pool sizes represent the ecosystem carbon storage capacity, which was 12.1 kg C m-2 with the coupled carbon-nitrogen global model, 14.6% lower than with the carbon-only model. The nitrogen down-regulation in modeled carbon storage is partly due to the 4.6% decrease in carbon influx (i.e., net primary productivity) and partly due to the 10.5% reduction in residence times. This steady-state analysis accelerated by the SAS method can facilitate comparative studies of structural differences in determining the ecosystem carbon storage capacity among biogeochemical models. Overall, the
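
    Step (2) is the analytic core: when carbon transfers are written in the common linear form dX/dt = B*u - A*X, the quasi-steady-state pools follow from a single linear solve. The sketch below illustrates this with a made-up three-pool system; the matrix and flux values are assumptions, not CABLE parameters.

        import numpy as np

        # illustrative 3-pool system dX/dt = B*u - A*X (1/yr and kg C m-2 yr-1)
        A = np.array([[0.50,  0.00, 0.00],
                      [-0.30, 0.20, 0.00],     # transfer from pool 1 into pool 2
                      [-0.10, -0.05, 0.01]])   # transfers into a slow soil pool
        Bu = np.array([1.20, 0.00, 0.00])      # stabilized NPP enters pool 1

        X_ss = np.linalg.solve(A, Bu)          # quasi-steady-state pools (kg C m-2)
        print(X_ss, "total carbon storage:", round(X_ss.sum(), 1))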

  7. Model-based Engineering for the Integration of Manufacturing Systems with Advanced Analytics

    OpenAIRE

    Lechevalier, David; Narayanan, Anantha; Rachuri, Sudarsan; Foufou, Sebti; Lee, Y. Tina

    2016-01-01

    To employ data analytics effectively and efficiently on manufacturing systems, engineers and data scientists need to collaborate closely to bring their domain knowledge together. In this paper, we introduce a domain-specific modeling approach to integrate a manufacturing system model with advanced analytics, in particular neural networks, to model predictions. Our approach combines a set of meta-models and transformatio...

  8. Many-core graph analytics using accelerated sparse linear algebra routines

    Science.gov (United States)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
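
    The core idea is compact enough to show directly: one level of breadth-first search is a sparse matrix-vector product of the adjacency matrix with the current frontier, which is how GraphBLAS-style systems recast traversal as linear algebra. SciPy stands in here for a many-core backend, and the toy graph is an assumption.

        import numpy as np
        from scipy.sparse import csr_matrix

        # 5-node directed toy graph: 0->1, 0->2, 1->3, 2->3, 3->4
        rows, cols = [0, 0, 1, 2, 3], [1, 2, 3, 3, 4]
        A = csr_matrix((np.ones(5), (rows, cols)), shape=(5, 5))

        frontier = np.zeros(5); frontier[0] = 1.0
        visited = frontier.astype(bool)
        while frontier.any():
            reached = (A.T @ frontier) > 0               # one BFS level = mat-vec
            frontier = np.where(visited, 0.0, reached * 1.0)
            visited |= frontier.astype(bool)
        print("BFS reached:", np.flatnonzero(visited))   # [0 1 2 3 4]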

  9. Comparison of FDTD numerical computations and analytical multipole expansion method for plasmonics-active nanosphere dimers.

    Science.gov (United States)

    Dhawan, Anuj; Norton, Stephen J; Gerhold, Michael D; Vo-Dinh, Tuan

    2009-06-08

    This paper describes a comparative study of finite-difference time-domain (FDTD) and analytical evaluations of electromagnetic fields in the vicinity of dimers of metallic nanospheres of plasmonics-active metals. The results of these two computational methods for determining the electromagnetic field enhancement in the region often referred to as "hot spots" between the two nanospheres forming the dimer were compared, and a strong correlation was observed for gold dimers. The analytical evaluation involved the use of the spherical-harmonic addition theorem to relate the multipole expansion coefficients between the two nanospheres. In these evaluations, the spacing between the two nanospheres forming the dimer was varied to obtain the effect of nanoparticle spacing on the electromagnetic fields in the regions between the nanostructures. Gold and silver were the metals investigated in our work, as they exhibit substantial plasmon resonance properties in the ultraviolet, visible, and near-infrared spectral regimes. The results indicate excellent correlation between the two computational methods, especially for gold nanosphere dimers, with only a 5-10% difference between the two methods. The effect of varying the diameters of the nanospheres forming the dimer on the electromagnetic field enhancement was also studied.

  10. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models play important roles in the design and analysis of chemicals-based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working ... development and application. The proposed work is part of a project for the development of methods and tools that allow systematic generation, analysis and solution of models for various objectives. It uses a computer-aided modeling framework that is based on a modeling methodology which combines ... In this contribution, the concept of template-based modeling is presented, and its application is highlighted for the specific case of catalytic membrane fixed-bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  11. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve on it with a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are semi-analytically derived from the empirical retrieval results for a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces a superior performance to the QAA model in absorption retrieval. Using the SAA model to retrieve absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  12. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used in computer forensic analysis; this paper proposes and elaborates a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. Together these provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  13. Analytical and finite element modeling of grounding systems

    Energy Technology Data Exchange (ETDEWEB)

    Luz, Mauricio Valencia Ferreira da [University of Santa Catarina (UFSC), Florianopolis, SC (Brazil)], E-mail: mauricio@grucad.ufsc.br; Dular, Patrick [University of Liege (Belgium). Institut Montefiore], E-mail: Patrick.Dular@ulg.ac.be

    2007-07-01

    Grounding is the art of making an electrical connection to the earth. This paper deals with the analytical and finite element modeling of grounding systems. An electrokinetic formulation using a scalar potential can benefit from floating potentials to define global quantities such as electric voltages and currents. The application concerns a single vertical grounding rod in one-, two- and three-layer soil, with its upper end at the soil surface. The problem has been modeled using a 2D axisymmetric electrokinetic formulation. The grounding resistance obtained by the finite element method is compared with the analytical one for one-layer soil. The results of this paper show that the finite element method is a powerful tool in the analysis of grounding systems at low frequencies. (author)
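
    The analytical benchmark for the one-layer case is classical: a single vertical rod of length L and radius a in soil of resistivity rho is commonly assigned the Dwight-type resistance R = rho/(2*pi*L) * (ln(4L/a) - 1). The sketch below evaluates it; the parameter values are assumptions, not those used in the paper.

        import math

        def rod_resistance(rho=100.0, L=3.0, a=0.008):
            """rho: soil resistivity (ohm*m); L: rod length (m); a: rod radius (m)."""
            return rho / (2.0 * math.pi * L) * (math.log(4.0 * L / a) - 1.0)

        print(f"R = {rod_resistance():.1f} ohm")   # roughly 33 ohm for these values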

  14. Analytical evaluation of the signal and noise propagation in x-ray differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Raupach, Rainer; Flohr, Thomas G

    2011-01-01

    We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that analytically predicts the relative performance of both imaging techniques, in the sense of the relative contrast-to-noise ratio, for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility; a superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, because phase information is periodic and unambiguous only within a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.

  15. SciDAC Visualization and Analytics Center for Enabling Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2014-09-14

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data-understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of our Center is to respond directly to that challenge by adapting, extending, creating when necessary, and deploying visualization and data-understanding technologies for our science stakeholders. Organized as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to respond to the needs of a diverse set of scientific stakeholders in a coordinated fashion, using a range of visualization, mathematics, statistics, computer and computational science, and data management technologies.

  16. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, and the associated computational power. In particular, we discuss the circuit model and cluster-state implementations using quantum optics with various encodings, such as dual-rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent-state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work on optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects of complexity theory needed to understand the results surveyed.

  17. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper aims to investigate the relationship between bond strength and reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement - a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, and later modified by Coronelli [Coronelli, D., 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to cover corroded bars, has been adopted. Estimates of the various parameters in this analytical model are proposed by the present authors, including the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and the cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens, based on a wide range of published experimental investigations of bond degradation in RC specimens due to reinforcement corrosion. The proposed analytical and empirical bond models are found to provide estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other reported analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. It has also been found that the analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are in agreement with those of the experimentally

  18. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    Science.gov (United States)

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies of intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validate FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite-materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. They demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF, and show that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs and represents distinctive groundwork able to sustain future refinements.

  19. A simplified model for computing equation of state of argon plasma

    International Nuclear Information System (INIS)

    Wang Caixia; Tian Yangmeng

    2006-01-01

    This paper presents a simplified new model for computing the equation of state and ionization degree of argon plasma, based on the Thomas-Fermi (TF) statistical model. The authors fitted the numerical results for the ionization potential calculated with the Thomas-Fermi statistical model to obtain an analytical function of the potential versus the degree of ionization, then calculated the ionization potential and the average degree of ionization of argon versus temperature and density in the local-thermal-equilibrium case at 10-1000 eV. The results of this simplified model are in basic agreement with several sets of theoretical and experimental data. The model can be used to calculate the equation of state of plasma mixtures and is expected to find wider use in the field of EML technology involving strongly ionized plasmas. (authors)

  20. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    Science.gov (United States)

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853

  1. ATLAS Analytics and Machine Learning Platforms

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Legger, Federica; Gardner, Robert

    2018-01-01

    In 2015, ATLAS Distributed Computing started to migrate its monitoring systems away from Oracle DB and decided to adopt new big-data platforms that are open source, horizontally scalable, and offer the flexibility of NoSQL systems. Three years later, the full software stack is in place, the system is considered in production, and it operates at near maximum capacity (in terms of storage capacity and tightly coupled analysis capability). The new model provides several tools for fast and easy-to-deploy monitoring and accounting. The main advantages are: ample ways to do complex analytics studies (using technologies such as Java, Pig, Spark, Python, and Jupyter), flexibility in the reorganization of data flows, and near-real-time and inline processing. The analytics studies improve our understanding of different computing systems and their interplay, thus enabling whole-system debugging and optimization. In addition, the platform provides services to alarm or warn on anomalous conditions, and several services closing feedback l...

  2. Analytical model of the optical vortex microscope.

    Science.gov (United States)

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope, a Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution for the whole path of the beam in the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step the expression for the light's complex amplitude has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory for small vortex displacements.

  3. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F., E-mail: fabien.formosa@univ-savoie.f [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France)

    2011-05-15

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA RE-1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on nonlinear dissipative phenomena, such as pressure drop losses within heat exchangers, which are dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.

  4. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    International Nuclear Information System (INIS)

    Formosa, F.

    2011-01-01

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA RE-1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on nonlinear dissipative phenomena, such as pressure drop losses within heat exchangers, which are dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSE.

  5. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    This paper describes a computer model of an overhead crane system. The overhead crane system consists of hoisting, trolley and crane mechanisms, as well as a two-axis payload system. Using the differential equations of motion of these mechanisms, derived from Lagrange equations of the second kind, an overhead crane computer model can be built. The computer model was implemented in Matlab. Transients of coordinate, linear speed and motor torque for the trolley and crane mechanisms were simulated, as were transients of payload sway about the vertical axis. The paper presents the trajectory of the trolley mechanism operating simultaneously with the crane mechanism, as well as a two-axis trajectory of the payload. The computer model of an overhead crane is a useful means of studying positioning control and anti-sway control systems.
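
    A minimal sketch of the payload-sway part of such a model: a pendulum of rope length L hanging from a trolley with a prescribed acceleration profile, i.e. the single-axis reduction of the Lagrange-derived equations mentioned above. The parameters and drive profile are assumptions, not the paper's crane data.

        import numpy as np
        from scipy.integrate import solve_ivp

        g, L = 9.81, 5.0                          # gravity (m/s^2), rope length (m)

        def rhs(t, y):
            theta, omega = y                      # sway angle (rad) and angular rate
            a_trolley = 0.5 if t < 4.0 else 0.0   # simple trolley drive (m/s^2)
            return [omega, -(g / L) * np.sin(theta) - (a_trolley / L) * np.cos(theta)]

        sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.01)
        print("max payload sway:", round(np.degrees(np.abs(sol.y[0]).max()), 2), "deg")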

  6. Computer-Based Mathematics Instructions for Engineering Students

    Science.gov (United States)

    Khan, Mustaq A.; Wall, Curtiss E.

    1996-01-01

    Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in solving some mathematical problems may be very tedious and time consuming. A significant amount of mathematical software, such as Mathematica, Mathcad, and Maple, is designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.

  7. An analytical model of flagellate hydrodynamics

    DEFF Research Database (Denmark)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders Peter

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left-right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming...

  8. Assessment of passive drag in swimming by numerical simulation and analytical procedure.

    Science.gov (United States)

    Barbosa, Tiago M; Ramos, Rui; Silva, António J; Marinho, Daniel A

    2018-03-01

    The aim was to compare passive drag when gliding underwater obtained by numerical simulation and by an analytical procedure. An Olympic swimmer was scanned by computed tomography and modelled gliding at a 0.75-m depth in the streamlined position. Steady-state computational fluid dynamics (CFD) analyses were performed in Fluent. A set of analytical procedures was selected concurrently. Friction drag (Df), pressure drag (Dpr), total passive drag force (Df+pr) and the drag coefficient (CD) were computed between 1.3 and 2.5 m/s by both techniques. Df+pr ranged from 45.44 to 144.06 N with CFD and from 46.03 to 167.06 N with the analytical procedure (differences: from 1.28% to 13.77%). CD ranged between 0.698 and 0.622 with CFD and between 0.657 and 0.644 with the analytical procedures (differences: 0.40-6.30%). Linear regression models showed a very high association for Df+pr plotted in absolute values (R2 = 0.98) and after log-log transformation (R2 = 0.99). The CD also obtained a very high adjustment for both absolute (R2 = 0.97) and log-log plots (R2 = 0.97). The bias for Df+pr was 8.37 N, and 0.076 N after logarithmic transformation. Df represented between 15.97% and 18.82% of Df+pr with CFD, and between 14.66% and 16.21% with the analytical procedures. Therefore, despite the bias, analytical procedures offer a feasible way of gaining insight into one's hydrodynamic characteristics.
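
    The analytical side of such a comparison can be sketched to first order as pressure drag on the frontal area plus turbulent flat-plate friction drag on the wetted area; the body dimensions and coefficients below are generic assumptions, not the scanned swimmer's values, yet the friction share they produce falls near the 15-19% range reported above.

        import math

        rho, nu = 998.0, 1.0e-6     # water density (kg/m^3), kinematic viscosity (m^2/s)
        A_front, A_wet, body_len = 0.07, 1.8, 2.4   # areas (m^2) and body length (m)

        def passive_drag(v, cd_pressure=0.45):
            Re = v * body_len / nu
            cf = 0.074 / Re**0.2                    # turbulent flat-plate friction law
            D_f = 0.5 * rho * v**2 * A_wet * cf     # friction drag (N)
            D_pr = 0.5 * rho * v**2 * A_front * cd_pressure   # pressure drag (N)
            return D_f, D_pr

        for v in (1.3, 2.0, 2.5):
            D_f, D_pr = passive_drag(v)
            print(f"v={v}: Df={D_f:5.1f} N, Dpr={D_pr:5.1f} N, "
                  f"Df share={D_f / (D_f + D_pr):.0%}")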

  9. Was Babbage's Analytical Engine intended to be a mechanical model of the mind?

    Science.gov (United States)

    Green, Christopher D

    2005-02-01

    In the 1830s, Charles Babbage worked on a mechanical computer he dubbed the Analytical Engine. Although some people around Babbage described his invention as though it had authentic mental powers, Babbage refrained from making such claims. He does not, however, seem to have discouraged those he worked with from mooting the idea publicly. This article investigates whether (1) the Analytical Engine was the focus of a covert research program into the mechanism of mentality; (2) Babbage opposed the idea that the Analytical Engine had mental powers but allowed his colleagues to speculate as they saw fit; or (3) Babbage believed such claims to be fanciful, but cleverly used the publicity they engendered to draw public and political attention to his project.

  10. LION: A dynamic computer model for the low-latitude ionosphere

    Directory of Open Access Journals (Sweden)

    J. A. Bittencourt

    2007-11-01

    A realistic, fully time-dependent computer model, denominated LION (Low-latitude Ionospheric model), that simulates the dynamic behavior of the low-latitude ionosphere is presented. The time evolution and spatial distribution of the ionospheric particle densities and velocities are computed by numerically solving the time-dependent, coupled, nonlinear system of continuity and momentum equations for the ions O+, O2+, NO+, N2+ and N+, taking into account photoionization of the atmospheric species by the solar extreme ultraviolet radiation, chemical and ionic production and loss reactions, and plasma transport processes, including the ionospheric effects of thermospheric neutral winds, plasma diffusion and electromagnetic E×B plasma drifts. The Earth's magnetic field is represented by a tilted centered magnetic dipole. This set of coupled nonlinear equations is solved along a given magnetic field line in a Lagrangian frame of reference moving vertically, in the magnetic meridian plane, with the electromagnetic E×B plasma drift velocity. The spatial and time distribution of the thermospheric neutral wind velocities and the pattern of the electromagnetic drifts are taken as known quantities, given through specified analytical or empirical models. The model simulation results are presented in the form of computer-generated color maps and reproduce the typical ionization distribution and time evolution normally observed in the low-latitude ionosphere, including details of the equatorial Appleton anomaly dynamics. The specific effects on the ionosphere due to changes in the thermospheric neutral winds and the electromagnetic plasma drifts can be investigated using different wind and drift models, including the important longitudinal effects associated with magnetic declination dependence and latitudinal separation between geographic and geomagnetic equators. The model runs on a normal personal computer (PC) and generates color maps illustrating the

  12. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  13. SPARTex: A Vertex-Centric Framework for RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2015-08-31

    A growing number of applications require combining SPARQL queries with generic graph search on RDF data. However, the lack of procedural capabilities in SPARQL makes it inappropriate for graph analytics. Moreover, RDF engines focus on SPARQL query evaluation whereas graph management frameworks perform only generic graph computations. In this work, we bridge the gap by introducing SPARTex, an RDF analytics framework based on the vertex-centric computation model. In SPARTex, user-defined vertex centric programs can be invoked from SPARQL as stored procedures. SPARTex allows the execution of a pipeline of graph algorithms without the need for multiple reads/writes of input data and intermediate results. We use a cost-based optimizer for minimizing the communication cost. SPARTex evaluates queries that combine SPARQL and generic graph computations orders of magnitude faster than existing RDF engines. We demonstrate a real system prototype of SPARTex running on a local cluster using real and synthetic datasets. SPARTex has a real-time graphical user interface that allows the participants to write regular SPARQL queries, use our proposed SPARQL extension to declaratively invoke graph algorithms or combine/pipeline both SPARQL querying and generic graph analytics.
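
    The vertex-centric ("think like a vertex") model underlying SPARTex is easy to illustrate outside of any RDF engine. The sketch below (plain Python, not SPARTex's actual API, which the record does not specify) runs a Pregel-style single-source shortest-path program over a toy graph: in each superstep every vertex that received messages updates its state and sends messages to its neighbors, and the computation halts when no messages remain.

```python
# Minimal Pregel-style vertex-centric computation (illustrative only;
# SPARTex's real API is not described in the record).
from collections import defaultdict

def sssp(edges, source):
    """Single-source shortest paths via superstep message passing."""
    graph = defaultdict(list)          # vertex -> [(neighbor, weight)]
    for u, v, w in edges:
        graph[u].append((v, w))
    dist = defaultdict(lambda: float("inf"))
    inbox = {source: [0.0]}            # initial message to the source
    while inbox:                       # one superstep per iteration
        outbox = defaultdict(list)
        for vertex, msgs in inbox.items():
            best = min(msgs)
            if best < dist[vertex]:    # vertex program: keep the minimum
                dist[vertex] = best
                for nbr, w in graph[vertex]:
                    outbox[nbr].append(best + w)   # message to neighbor
        inbox = outbox                 # halt when no messages are in flight
    return dict(dist)

print(sssp([("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)], "a"))
# {'a': 0.0, 'b': 1.0, 'c': 3.0}
```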

  14. Investigation of a Markov Model for Computer System Security Threats

    Directory of Open Access Journals (Sweden)

    Alexey A. Magazev

    2017-01-01

    Full Text Available In this work, a model for computer system security threats formulated in terms of Markov processes is investigated. In the framework of this model, the functioning of the computer system is considered as a sequence of failures and recovery actions which appear as results of information security threats acting on the system. We provide a detailed description of the model: explicit analytical formulas for the probabilities of computer system states at any arbitrary moment of time are derived, some limiting cases are discussed, and the long-run dynamics of the system is analysed. The dependence of the security state probability (i.e. the state for which threats are absent) on the probabilities of threats is investigated separately. In particular, it is shown that this dependence is qualitatively different for odd and even moments of time. For instance, in the case of one threat the security state probability demonstrates non-monotonic dependence on the probability of threat at even moments of time; this function admits at least one local minimum in its domain of definition. This feature is believed to be important because it allows one to locate the most dangerous ranges of threat probabilities, where the security state probability can be lower than the permissible level. Finally, we introduce an important characteristic of the model, called the relaxation time, by means of which we construct the permissible domain of the security parameters. The prospects of applying these results to the problem of finding the optimal values of the security parameters are also discussed.
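
    The flavor of such a model can be reproduced with a minimal two-state Markov chain (secure vs. failed). The transition probabilities below are assumed for illustration and are not taken from the paper; the sketch computes the state distribution step by step and the long-run (stationary) security probability.

```python
# Hypothetical two-state security Markov chain (secure vs. failed);
# the record's actual transition structure is not reproduced here.
import numpy as np

p_threat, p_recover = 0.3, 0.8       # assumed probabilities, per step
P = np.array([[1 - p_threat, p_threat],       # secure -> {secure, failed}
              [p_recover,    1 - p_recover]]) # failed -> {secure, failed}

state = np.array([1.0, 0.0])         # start in the secure state
for n in range(1, 9):
    state = state @ P                # distribution after n steps
    print(f"t={n}: P(secure) = {state[0]:.4f}")

# Long-run security probability from the eigenvector of P^T for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("stationary P(secure) =", round(pi[0], 4))   # ~0.7273 here
```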

  15. Analytic solution of the Starobinsky model for inflation

    Energy Technology Data Exchange (ETDEWEB)

    Paliathanasis, Andronikos [Universidad Austral de Chile, Instituto de Ciencias Fisicas y Matematicas, Valdivia (Chile); Durban University of Technology, Institute of Systems Science, Durban (South Africa)

    2017-07-15

    We prove that the field equations of the Starobinsky model for inflation in a Friedmann–Lemaître–Robertson–Walker metric constitute an integrable system. The analytical solution in terms of a Painlevé series for the Starobinsky model is presented for the case of zero and nonzero spatial curvature. In both cases the leading-order term describes the radiation era provided by the corresponding higher-order theory. (orig.)

  16. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. In practice, however, input parameters are often correlated. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
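
    For the special case of a linear model the effect of input correlations on output uncertainty is exact and easy to verify: Var(aᵀX) = aᵀΣa. The following sketch (coefficients, standard deviations and correlation assumed for illustration) compares the analytic variance with correlations against the value obtained under an incorrect independence assumption, with a Monte Carlo check.

```python
# Variance of a linear model y = a^T x under correlated inputs.
# A minimal sketch: for linear models the analytic result is exact.
import numpy as np

a = np.array([1.0, 2.0])             # model coefficients (assumed)
sigma = np.array([0.5, 0.3])         # input standard deviations (assumed)
rho = 0.6                            # input correlation (assumed)

cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

var_corr  = a @ cov @ a                   # with correlation: 0.97
var_indep = np.sum((a * sigma) ** 2)      # independence assumption: 0.61
print(var_corr, var_indep)                # ignoring correlation misleads

# Monte Carlo check of the analytic value
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
print(np.var(x @ a))                      # ~0.97
```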

  17. Computing derivative-based global sensitivity measures using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, B.; Mai, C.V.

    2015-01-01

    In the field of computer experiments, sensitivity analysis aims at quantifying the relative importance of each input parameter (or combinations thereof) of a computational model with respect to the model output uncertainty. Variance decomposition methods leading to the well-known Sobol' indices are recognized as accurate techniques, albeit at a rather high computational cost. The use of polynomial chaos expansions (PCE) to compute Sobol' indices alleviates this computational burden. However, when dealing with high-dimensional input vectors, it is good practice to first use screening methods in order to discard unimportant variables. The derivative-based global sensitivity measures (DGSMs) have been developed recently for this purpose. In this paper we show how polynomial chaos expansions may be used to compute DGSMs analytically as mere post-processing. This requires the analytical derivation of the derivatives of the orthonormal polynomials which enter the PCE. Closed-form expressions for Hermite, Legendre and Laguerre polynomial expansions are given. The efficiency of the approach is illustrated on two well-known benchmark problems in sensitivity analysis. - Highlights: • Derivative-based global sensitivity measures (DGSMs) have been developed for screening purposes. • Polynomial chaos expansions (PCE) are used as a surrogate model of the original computational model. • From a PCE the DGSM can be computed analytically. • The paper provides the derivatives of Hermite, Legendre and Laguerre polynomials for this purpose
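
    The key identity is easy to demonstrate in one dimension with probabilists' Hermite polynomials: since He_k'(x) = k He_{k-1}(x) and E[He_k(X)^2] = k! for X ~ N(0,1), a PCE f(x) = Σ_k a_k He_k(x) has the DGSM ν = E[(f')^2] = Σ_k (a_k k)^2 (k-1)!. A minimal sketch with assumed coefficients (not the paper's benchmark problems):

```python
# DGSM from a 1-D Hermite PCE, using He_k'(x) = k*He_{k-1}(x) and
# E[He_k(X)^2] = k! for X ~ N(0,1). Coefficients a_k are assumed test data.
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite

a = np.array([0.0, 1.2, 0.5, 0.1])   # PCE coefficients a_0..a_3 (assumed)

# Analytic DGSM: nu = E[(df/dx)^2] = sum_k (a_k * k)^2 * (k-1)!
nu = sum((a[k] * k) ** 2 * math.factorial(k - 1) for k in range(1, len(a)))

# Monte Carlo check of the closed form
rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)
df = He.hermeval(x, He.hermeder(a))  # derivative of the expansion
print(nu, np.mean(df ** 2))          # both ~ 2.62
```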

  18. Experimental evaluation of analytical penumbra calculation model for wobbled beams

    International Nuclear Information System (INIS)

    Kohno, Ryosuke; Kanematsu, Nobuyuki; Yusa, Ken; Kanai, Tatsuaki

    2004-01-01

    The goal of radiotherapy is not only to apply a high radiation dose to a tumor, but also to avoid side effects in the surrounding healthy tissue. Therefore, it is important for carbon-ion treatment planning to calculate accurately the effects of the lateral penumbra. In this article, for wobbled beams under various irradiation conditions, we focus on the lateral penumbras at several aperture positions of one side leaf of the multileaf collimator. The penumbras predicted by an analytical penumbra calculation model were compared with the measured results. The results calculated by the model for various conditions agreed well with the experimental ones. In conclusion, we found that the analytical penumbra calculation model accurately predicts the measured results for wobbled beams and is useful for carbon-ion treatment planning

  19. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    Science.gov (United States)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure for designing a micro-opto-mechanical pressure sensor (MOMPS), taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance is carefully studied, and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed to enable fast design of this new type of pressure sensor.

  20. Essential Means for Urban Computing: Specification of Web-Based Computing Platforms for Urban Planning, a Hitchhiker’s Guide

    Directory of Open Access Journals (Sweden)

    Pirouz Nourian

    2018-03-01

    Full Text Available This article provides an overview of the specifications of web-based computing platforms for urban data analytics and computational urban planning practice. There is currently a variety of tools and platforms that can be used in urban computing practices, including scientific computing languages, interactive web languages, data-sharing platforms and many desktop computing environments, e.g., GIS software applications. We have reviewed a list of technologies considering their potential and applicability in urban planning and urban data analytics. This review is based not only on technical factors, such as the capabilities of the programming languages, but also on the ease of developing and sharing complex data processing workflows. The arena of web-based computing platforms is currently under rapid development and is too volatile to be predictable; therefore, in this article we focus on the specification of the requirements and potentials from an urban planning point of view rather than speculating about the fate of computing platforms or programming languages. The article presents a list of promising computing technologies, a technical specification of the essential data models and operators for geo-spatial data processing, and mathematical models for an ideal urban computing platform.

  1. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    Science.gov (United States)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges has obstructed the development of such vehicles. These technical challenges are partially due both to the inability to accurately test scaled vehicles in wind tunnels and to the time-intensive nature of high-fidelity computational modeling, particularly for the fluid, using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, ranging from simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary-layer interaction. An additional focus of this dissertation is the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two

  2. Evaluating Modeling Sessions Using the Analytic Hierarchy Process

    NARCIS (Netherlands)

    Ssebuggwawo, D.; Hoppenbrouwers, S.J.B.A.; Proper, H.A.; Persson, A.; Stirna, J.

    2008-01-01

    In this paper, which is methodological in nature, we propose to use an established method from the field of Operations Research, the Analytic Hierarchy Process (AHP), in the integrated, stakeholder-oriented evaluation of enterprise modeling sessions: their language, process, tool (medium), and

  3. Collaborative data analytics for smart buildings: opportunities and models

    DEFF Research Database (Denmark)

    Lazarova-Molnar, Sanja; Mohamed, Nader

    2018-01-01

    Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models. This paper discusses the concept of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such a collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.

  4. PORFLO - a continuum model for fluid flow, heat transfer, and mass transport in porous media. Model theory, numerical methods, and computational tests

    International Nuclear Information System (INIS)

    Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.

    1985-09-01

    Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs

  5. Analytic model of heat deposition in spallation neutron target

    International Nuclear Information System (INIS)

    Findlay, D.J.S.

    2015-01-01

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  6. Analytic model of heat deposition in spallation neutron target

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, D.J.S.

    2015-12-11

    A simple analytic model for estimating deposition of heat in a spallation neutron target is presented—a model that can readily be realised in an unambitious spreadsheet. The model is based on simple representations of the principal underlying physical processes, and is intended largely as a ‘sanity check’ on results from Monte Carlo codes such as FLUKA or MCNPX.

  7. Assessment of Analytic Water hammer Pressure Model of FAI/08-70

    International Nuclear Information System (INIS)

    Park, Ju Yeop; Yoo, Seung Hun; Seul, Kwang-Won

    2016-01-01

    In evaluating water hammer effects on safety-related systems, methods developed by the US utility are likely to be adopted in Korea. For example, the US utility developed specific methods to evaluate pressure and loading transients on piping due to water hammer, as in FAI/08-70. The methods of FAI/08-70 would be applied in Korea whenever a regulatory request arises to evaluate water hammer effects due to non-condensable gas accumulation in safety-related systems. Specifically, FAI/08-70 gives an analytic model which can be used to analyze the maximum transient pressure and the maximum transient loading on the piping of safety-related systems due to the non-condensable-gas-induced water hammer effect. Therefore, it is meaningful to review the FAI/08-70 methods and to apply them to a specific case, to see whether they give reasonable estimates, before applying them to domestic nuclear power plants. In the present study, the analytic water hammer pressure model of FAI/08-70 is reviewed in detail and applied to a specific experiment of FAI/08-70 to see if it gives a reasonable estimate of the peak water hammer pressure. Specifically, we assess experiment 52A of FAI/08-70, which adopts a flushed initial condition with a short rising piping length and a high-level piping length of 51 inches. The calculated analytic water hammer pressure peak shows close agreement with the measured experimental data of 52A. However, the theoretical value is slightly lower than the experimental value, which implies that the analytic model of FAI/08-70 is not conservative

  8. Assessment of Analytic Water hammer Pressure Model of FAI/08-70

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ju Yeop; Yoo, Seung Hun; Seul, Kwang-Won [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In evaluating water hammer effects on safety-related systems, methods developed by the US utility are likely to be adopted in Korea. For example, the US utility developed specific methods to evaluate pressure and loading transients on piping due to water hammer, as in FAI/08-70. The methods of FAI/08-70 would be applied in Korea whenever a regulatory request arises to evaluate water hammer effects due to non-condensable gas accumulation in safety-related systems. Specifically, FAI/08-70 gives an analytic model which can be used to analyze the maximum transient pressure and the maximum transient loading on the piping of safety-related systems due to the non-condensable-gas-induced water hammer effect. Therefore, it is meaningful to review the FAI/08-70 methods and to apply them to a specific case, to see whether they give reasonable estimates, before applying them to domestic nuclear power plants. In the present study, the analytic water hammer pressure model of FAI/08-70 is reviewed in detail and applied to a specific experiment of FAI/08-70 to see if it gives a reasonable estimate of the peak water hammer pressure. Specifically, we assess experiment 52A of FAI/08-70, which adopts a flushed initial condition with a short rising piping length and a high-level piping length of 51 inches. The calculated analytic water hammer pressure peak shows close agreement with the measured experimental data of 52A. However, the theoretical value is slightly lower than the experimental value, which implies that the analytic model of FAI/08-70 is not conservative.

  9. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies of 0.5 day⁻¹ and below. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  10. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2013-12-01

    Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.

  11. MAGNETO-FRICTIONAL MODELING OF CORONAL NONLINEAR FORCE-FREE FIELDS. I. TESTING WITH ANALYTIC SOLUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keppens, R. [School of Astronomy and Space Science, Nanjing University, Nanjing 210023 (China); Xia, C. [Centre for mathematical Plasma-Astrophysics, Department of Mathematics, KU Leuven, B-3001 Leuven (Belgium); Valori, G., E-mail: guoyang@nju.edu.cn [University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom)

    2016-09-10

    We report our implementation of the magneto-frictional method in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The method aims at applications where local adaptive mesh refinement (AMR) is essential to make follow-up dynamical modeling affordable. We quantify its performance in both domain-decomposed uniform grids and block-adaptive AMR computations, using all frequently employed force-free, divergence-free, and other vector comparison metrics. As test cases, we revisit the semi-analytic solution of Low and Lou in both Cartesian and spherical geometries, along with the topologically challenging Titov–Démoulin model. We compare different combinations of spatial and temporal discretizations, and find that the fourth-order central difference with a local Lax–Friedrichs dissipation term in a single-step marching scheme is an optimal combination. The initial condition is provided by the potential field, which is the potential field source surface model in spherical geometry. Various boundary conditions are adopted, ranging from fully prescribed cases where all boundaries are assigned with the semi-analytic models, to solar-like cases where only the magnetic field at the bottom is known. Our results demonstrate that all the metrics compare favorably to previous works in both Cartesian and spherical coordinates. Cases with several AMR levels perform in accordance with their effective resolutions. The magneto-frictional method in MPI-AMRVAC allows us to model a region of interest with high spatial resolution and large field of view simultaneously, as required by observation-constrained extrapolations using vector data provided with modern instruments. The applications of the magneto-frictional method to observations are shown in an accompanying paper.

  12. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long-duration spaceflight. Over the past decade, DAP has developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long-duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short- and long-duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both the important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.

  13. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  14. Analytical and empirical mathematics with computers

    International Nuclear Information System (INIS)

    Wolfram, S.

    1986-01-01

    In this presentation, some of the practical methodological and theoretical implications of computation for the mathematical sciences are discussed. Computers are becoming an increasingly significant tool for research in the mathematical sciences. This paper discusses some of the fundamental ways in which computers have been, and can be, used to do mathematics

  15. An analytical model for interactive failures

    International Nuclear Information System (INIS)

    Sun Yong; Ma Lin; Mathew, Joseph; Zhang Sheng

    2006-01-01

    In some systems, failures of certain components can interact with each other and accelerate the failure rates of these components. Such failures are defined as interactive failures. Interactive failure is a prevalent cause of failure in complex systems, particularly in mechanical systems. The failure risk of an asset will be underestimated if this interactive effect is ignored; when failure risk is assessed, interactive failures of an asset need to be considered. However, the literature is silent on previous research work in this field. This paper introduces the concept of interactive failure, develops an analytical model to analyse this type of failure quantitatively, and verifies the model using case studies and experiments

  16. An Analytic Approach to Developing Transport Threshold Models of Neoclassical Tearing Modes in Tokamaks

    International Nuclear Information System (INIS)

    Mikhailovskii, A.B.; Shirokov, M.S.; Konovalov, S.V.; Tsypin, V.S.

    2005-01-01

    Transport threshold models of neoclassical tearing modes in tokamaks are investigated analytically. An analysis is made of the competition between strong transverse heat transport, on the one hand, and longitudinal heat transport, longitudinal heat convection, longitudinal inertial transport, and rotational transport, on the other hand, which leads to the establishment of the perturbed temperature profile in magnetic islands. It is shown that, in all these cases, the temperature profile can be found analytically by using rigorous solutions to the heat conduction equation in the near and far regions of a chain of magnetic islands and then by matching these solutions. Analytic expressions for the temperature profile are used to calculate the contribution of the bootstrap current to the generalized Rutherford equation for the island width evolution with the aim of constructing particular transport threshold models of neoclassical tearing modes. Four transport threshold models, differing in the underlying competing mechanisms, are analyzed: collisional, convective, inertial, and rotational models. The collisional model constructed analytically is shown to coincide exactly with that calculated numerically; the reason is that the analytical temperature profile turns out to be the same as the numerical profile. The results obtained can be useful in developing the next generation of general threshold models. The first steps toward such models have already been made

  17. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, clinical imaging and segmentation, and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  18. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    Science.gov (United States)

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
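
    The idea can be sketched for the pantograph equation y'(t) = -y(t) + 0.5 y(t/2), y(0) = 1: posit a polynomial ansatz and choose its coefficients to minimize the squared residual at collocation points. This is an illustrative variant, not the authors' exact PLSM formulation; because the equation is linear, the least-squares problem reduces to ordinary linear algebra.

```python
# Polynomial least-squares sketch for the pantograph equation
#   y'(t) = -y(t) + 0.5*y(t/2),  y(0) = 1,  on [0, 1].
# An illustrative variant of the idea, not the paper's exact PLSM.
import numpy as np

deg = 8
t = np.linspace(0.0, 1.0, 60)               # collocation points

# The residual of y(t) = sum_k c_k t^k is linear in c.
# Column k of the design matrix: d/dt t^k + t^k - 0.5*(t/2)^k
A = np.column_stack([
    (k * t**(k - 1) if k > 0 else np.zeros_like(t))
    + t**k - 0.5 * (t / 2.0)**k
    for k in range(deg + 1)
])
b = np.zeros_like(t)

# Enforce the initial condition y(0) = 1 with a heavily weighted row.
A = np.vstack([A, 1e6 * np.eye(deg + 1)[0]])
b = np.append(b, 1e6)

c, *_ = np.linalg.lstsq(A, b, rcond=None)
y = np.polynomial.polynomial.polyval(t, c)
print(y[0], y[-1])   # y(0) ~ 1; y(1) from the polynomial approximation
```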

  19. Computationally simple, analytic, closed form solution of the Coulomb self-interaction problem in Kohn Sham density functional theory

    International Nuclear Information System (INIS)

    Gonis, Antonios; Daene, Markus W.; Nicholson, Don M.; Stocks, George Malcolm

    2012-01-01

    We have developed and tested, in terms of atomic calculations, an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the implementation of the Kohn–Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for the He and Be atoms, and through comparisons with experiment and with the results obtained within the optimized effective potential (OEP) method.

  20. An analytical excitation model for an ionizing plasma

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.; Sijde, van der B.; Schram, D.C.

    1983-01-01

    From an analytical model for the population of high-lying excited levels in ionizing plasmas it appears that the distribution is a superposition of the equilibrium (Saha) value and an overpopulation. This overpopulation takes the form of a Maxwell distribution for free electrons. Experiments for He

  1. Analytic processor model for fast design-space exploration

    NARCIS (Netherlands)

    Jongerius, R.; Mariani, G.; Anghel, A.; Dittmann, G.; Vermij, E.; Corporaal, H.

    2015-01-01

    In this paper, we propose an analytic model that takes as inputs a) a parametric microarchitecture-independent characterization of the target workload, and b) a hardware configuration of the core and the memory hierarchy, and returns as output an estimation of processor-core performance. To validate

  2. Computing the stability of steady-state solutions of mathematical models of the electrical activity in the heart.

    Science.gov (United States)

    Tveito, Aslak; Skavhaug, Ola; Lines, Glenn T; Artebrant, Robert

    2011-08-01

    Instabilities in the electro-chemical resting state of the heart can generate ectopic waves that in turn can initiate arrhythmias. We derive methods for computing the resting state for mathematical models of the electro-chemical process underpinning a heartbeat, and we estimate the stability of the resting state by invoking the largest real part of the eigenvalues of a linearized model. The implementation of the methods is described and a number of numerical experiments illustrate the feasibility of the methods. In particular, we test the methods for problems where we can compare the solutions with analytical results, and problems where we have solutions computed by independent software. The software is also tested for a fairly realistic 3D model. Copyright © 2011 Elsevier Ltd. All rights reserved.
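
    A minimal version of the procedure fits in a few lines: locate the resting state with a nonlinear solver, linearize with a finite-difference Jacobian, and inspect the largest real part of the eigenvalues. The sketch below uses the FitzHugh-Nagumo model as a stand-in for the far more detailed cardiac models of the paper; parameters are the usual textbook values.

```python
# Resting-state stability via the largest real part of the Jacobian's
# eigenvalues, illustrated on the FitzHugh-Nagumo model (an assumed,
# much simpler stand-in for the cardiac models in the paper).
import numpy as np
from scipy.optimize import fsolve

a, b, eps, I = 0.7, 0.8, 0.08, 0.0   # textbook parameters

def rhs(u):
    v, w = u
    return np.array([v - v**3 / 3 - w + I,
                     eps * (v + a - b * w)])

rest = fsolve(rhs, x0=np.array([-1.0, -0.5]))   # resting state

# Finite-difference Jacobian at the resting state
h, n = 1e-6, 2
J = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = h
    J[:, j] = (rhs(rest + e) - rhs(rest - e)) / (2 * h)

lam = np.linalg.eigvals(J)
print("resting state:", rest)
print("max Re(lambda):", lam.real.max())   # negative => stable
```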

  3. The Oak Ridge Heat Pump Models: I. A Steady-State Computer Design Model of Air-to-Air Heat Pumps

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, S.K.; Rice, C.K.

    1999-12-10

    The ORNL Heat Pump Design Model is a FORTRAN-IV computer program to predict the steady-state performance of conventional, vapor compression, electrically-driven, air-to-air heat pumps in both heating and cooling modes. This model is intended to serve as an analytical design tool for use by heat pump manufacturers, consulting engineers, research institutions, and universities in studies directed toward the improvement of heat pump performance. The Heat Pump Design Model allows the user to specify: system operating conditions, compressor characteristics, refrigerant flow control devices, fin-and-tube heat exchanger parameters, fan and indoor duct characteristics, and any of ten refrigerants. The model will compute: system capacity and COP (or EER), compressor and fan motor power consumptions, coil outlet air dry- and wet-bulb temperatures, air- and refrigerant-side pressure drops, a summary of the refrigerant-side states throughout the cycle, and overall compressor efficiencies and heat exchanger effectiveness. This report provides thorough documentation of how to use and/or modify the model. This is a revision of an earlier report containing miscellaneous corrections and information on availability and distribution of the model--including an interactive version.

  4. Analytical Model of Thermo-electrical Behaviour in Superconducting Resistive Core Cables

    CERN Document Server

    Calvi, M; Breschi, M; Coccoli, M; Granieri, P; Iriart, G; Lecci, F; Siemko, A

    2006-01-01

    High-field superconducting Nb$_{3}$Sn accelerator magnets above 14 T, for future High Energy Physics applications, call for improvements in the design of the protection system against resistive transitions. The longitudinal quench propagation velocity (vq) is one of the parameters defining the requirements of the protection. Up to now vq has always been considered a physical parameter defined by the operating conditions (the bath temperature, cooling conditions, the magnetic field and the overall current density) and the type of superconductor and stabilizer used. It is possible to enhance the quench propagation velocity by segregating a percentage of the stabilizer into the core, while keeping the total amount constant and tuning the contact resistance between the superconducting strands and the core. An analytical model and computer simulations are presented to explain the phenomenon. The consequences with respect to minimum quench energy are evidenced and the strategy to optimize the cable design is di...

  5. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    Science.gov (United States)

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    Science.gov (United States)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  7. An analytical model for nanoparticles concentration resulting from infusion into poroelastic brain tissue.

    Science.gov (United States)

    Pizzichelli, G; Di Michele, F; Sinibaldi, E

    2016-02-01

    We consider the infusion of a diluted suspension of nanoparticles (NPs) into poroelastic brain tissue, in view of relevant biomedical applications such as intratumoral thermotherapy. Indeed, the high impact of the related pathologies motivates the development of advanced therapeutic approaches, whose design also benefits from theoretical models. This study provides an analytical expression for the time-dependent NP concentration during infusion into poroelastic brain tissue, which also accounts for particle binding onto cells (by recalling relevant results from colloid filtration theory). Our model is computationally inexpensive and, compared to fully numerical approaches, explicitly elucidates the role of the physical aspects involved (tissue poroelasticity, infusion parameters, NP physico-chemical properties, and the NP-tissue interactions underlying binding). We also present illustrative results based on parameters taken from the literature, considering clinically relevant ranges for the infusion parameters. Moreover, we thoroughly assess the model's working assumptions and discuss its limitations. While laying no claim to generality, our model can be used to support the development of more ambitious numerical approaches, towards the preliminary design of novel therapies based on NP infusion into brain tissue. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Application of the Laplace transform method for computational modelling of radioactive decay series

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Deise L.; Damasceno, Ralf M.; Barros, Ricardo C. [Univ. do Estado do Rio de Janeiro (IME/UERJ) (Brazil). Programa de Pos-graduacao em Ciencias Computacionais

    2012-03-15

    It is well known that when spent fuel is removed from the core, it is still composed of a considerable amount of radioactive elements with significant half-lives. Most actinides, in particular plutonium, fall into this category and have to be safely disposed of. One solution is to store the long-lived spent fuel as it is, by encasing and burying it deep underground in a stable geological formation. This implies estimating the transmutation of these radioactive elements with time. Therefore, we describe in this paper the application of the Laplace transform technique in matrix formulation to analytically solve the initial value problems that mathematically model radioactive decay series. Given the initial amount of each type of radioactive isotope in the decay series, the computer code generates the amounts at a given time of interest, or may plot a graph of the time evolution of the amount of each type of isotope in the series. This computer code, which we refer to as the LTRad_L code, where L is the number of types of isotopes belonging to the series, was developed using the Scilab free platform for numerical computation and can model one segment or the entire chain of any of the three radioactive series existing on Earth today. Numerical results are given for typical model problems to illustrate the computer code's efficiency and accuracy. (orig.)

  9. Application of the Laplace transform method for computational modelling of radioactive decay series

    International Nuclear Information System (INIS)

    Oliveira, Deise L.; Damasceno, Ralf M.; Barros, Ricardo C.

    2012-01-01

    It is well known that when spent fuel is removed from the core, it is still composed of a considerable amount of radioactive elements with significant half-lives. Most actinides, in particular plutonium, fall into this category and have to be safely disposed of. One solution is to store the long-lived spent fuel as it is, by encasing and burying it deep underground in a stable geological formation. This implies estimating the transmutation of these radioactive elements with time. Therefore, we describe in this paper the application of the Laplace transform technique in matrix formulation to analytically solve the initial value problems that mathematically model radioactive decay series. Given the initial amount of each type of radioactive isotope in the decay series, the computer code generates the amounts at a given time of interest, or may plot a graph of the time evolution of the amount of each type of isotope in the series. This computer code, which we refer to as the LTRad_L code, where L is the number of types of isotopes belonging to the series, was developed using the Scilab free platform for numerical computation and can model one segment or the entire chain of any of the three radioactive series existing on Earth today. Numerical results are given for typical model problems to illustrate the computer code's efficiency and accuracy. (orig.)
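
    The Laplace-transform solution described in the two records above is equivalent to the classical Bateman result: writing the chain as dN/dt = A N with a lower-bidiagonal decay matrix A gives the closed form N(t) = exp(At) N(0). A minimal sketch for a hypothetical three-member chain (decay constants assumed for illustration, not taken from the paper):

```python
# Decay chain N1 -> N2 -> N3 (stable), solved via the matrix exponential,
# which is what the Laplace-transform method yields in closed form.
# Decay constants are illustrative, not from the paper.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 0.05, 0.01        # 1/yr, assumed
A = np.array([[-lam1,  0.0,  0.0],
              [ lam1, -lam2, 0.0],
              [ 0.0,   lam2, 0.0]])   # last species is stable

N0 = np.array([1000.0, 0.0, 0.0])     # initial atom counts
for t in (0.0, 10.0, 50.0, 100.0):
    N = expm(A * t) @ N0               # N(t) = exp(At) N(0)
    print(f"t={t:6.1f} yr: N = {np.round(N, 1)}")
```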

  10. Analytical heat transfer modeling of a new radiation calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Obame Ndong, Elysée [Department of Industrial Engineering and Maintenance, University of Sciences and Technology of Masuku (USTM), BP 941 Franceville (Gabon); Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Gallot-Lavallée, Olivier [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France); Aitken, Frédéric, E-mail: frederic.aitken@g2elab.grenoble-inp.fr [Grenoble Electrical Engineering Laboratory (G2Elab), University Grenoble Alpes and CNRS, G2Elab, F38000 Grenoble (France)

    2016-06-10

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensitivity of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with the analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is to evaluate the feasibility and performance of the calorimeter by assessing the measurement of the power losses of some electrical devices by radiation, and the influence of the geometry and materials. Finally, the theoretical sensitivity of the new apparatus is estimated at ±1 mW. Based on these results, the calorimeter has been successfully implemented and patented.

  11. Analytical heat transfer modeling of a new radiation calorimeter

    International Nuclear Information System (INIS)

    Obame Ndong, Elysée; Gallot-Lavallée, Olivier; Aitken, Frédéric

    2016-01-01

    Highlights: • Design of a new calorimeter for measuring heat power loss in electrical components. • The calorimeter can operate in a temperature range from −50 °C to 150 °C. • An analytical model of heat transfers for this new calorimeter is presented. • The theoretical sensitivity of the new apparatus is estimated at ±1 mW. - Abstract: This paper deals with the analytical modeling of heat transfers simulating a new radiation calorimeter operating in a temperature range from −50 °C to 150 °C. The aim of this modeling is to evaluate the feasibility and performance of the calorimeter by assessing the measurement of the power losses of some electrical devices by radiation, and the influence of the geometry and materials. Finally, the theoretical sensitivity of the new apparatus is estimated at ±1 mW. Based on these results, the calorimeter has been successfully implemented and patented.

  12. A subjective and objective fuzzy-based analytical hierarchy process model for prioritization of lean product development practices

    Directory of Open Access Journals (Sweden)

    Daniel O. Aikhuele

    2017-06-01

    Full Text Available In this paper, a subjective and objective fuzzy-based Analytical Hierarchy Process (AHP) model is proposed. The model, which is based on a newly defined evaluation matrix, replaces the fuzzy comparison matrix (FCM) in the traditional fuzzy AHP model, which has been found ineffective and time-consuming when the number of criteria/alternatives increases. The main advantage of the new model is that it is straightforward and completely eliminates the repetitive adjustment of data that is common with the FCM in the traditional AHP model. The model reduces the complete dependency on human judgment in prioritization assessment, since the weight values are solved automatically using the evaluation matrix and the modified priority weight formula in the proposed model. By virtue of a numerical case study, the model is successfully applied to the determination of the implementation priorities of lean practices for a product development environment and compared with similar computational methods in the literature.
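
    For orientation, the classical AHP step that the paper modifies computes priority weights as the principal eigenvector of a pairwise-comparison matrix. The paper's own evaluation-matrix formula is not reproduced in the record, so the sketch below shows only the textbook method, with assumed judgments.

```python
# Classical AHP priority weights via the principal eigenvector of a
# pairwise-comparison matrix. This is the textbook method the paper
# modifies; the paper's evaluation-matrix formula is not reproduced.
import numpy as np

# Saaty-style pairwise comparison of three criteria (values assumed)
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

w, v = np.linalg.eig(M)
k = np.argmax(w.real)                 # principal eigenvalue
weights = np.abs(v[:, k].real)
weights /= weights.sum()
print(np.round(weights, 3))           # priority vector, sums to 1

# Consistency ratio (random index RI = 0.58 for n = 3)
n = M.shape[0]
CI = (w.real.max() - n) / (n - 1)
print("CR =", round(CI / 0.58, 3))    # < 0.1 is conventionally acceptable
```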

  13. Bubbles in inkjet printheads: analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  14. Bubbles in inkjet printheads : analytical and numerical models

    NARCIS (Netherlands)

    Jeurissen, R.J.M.

    2009-01-01

    The phenomenon of nozzle failure of an inkjet printhead due to entrainment of air bubbles was studied using analytical and numerical models. The studied inkjet printheads consist of many channels in which an acoustic field is generated to eject a droplet. When an air bubble is entrained, it disrupts

  15. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  16. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Science.gov (United States)

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high-resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely coupled service-oriented architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.

  17. Analytical model of internally coupled ears

    DEFF Research Database (Denmark)

    Vossen, Christine; Christensen-Dalsgaard, Jakob; Leo van Hemmen, J

    2010-01-01

    Lizards and many birds possess a specialized hearing mechanism: internally coupled ears where the tympanic membranes connect through a large mouth cavity so that the vibrations of the tympanic membranes influence each other. This coupling enhances the phase differences and creates amplitude...... additionally provides the opportunity to incorporate the effect of the asymmetrically attached columella, which leads to the activation of higher membrane vibration modes. Incorporating this effect, the analytical model can explain measurements taken from the tympanic membrane of a living lizard, for example...

  18. A Proposed Analytical Model for Integrated Pick-and-Sort Systems

    Directory of Open Access Journals (Sweden)

    Recep KIZILASLAN

    2013-11-01

    Full Text Available In this study we present an analytical approach to the integration of order picking and sortation operations, which are the most important, labour-intensive and costly activities in warehouses. The main aim is to investigate order picking and sorting efficiencies under different design issues as a function of order wave size. An integrated analytical model is proposed to estimate the optimum order picking and order sortation efficiency. The model, which has been tested by simulations with different illustrative examples, calculates the optimum wave size that resolves the trade-off between picking and sorting operations and maximizes order picking and sortation efficiency. Our model also allows system designers to predict the order picking and sorting capacity for different system configurations. This study presents an innovative approach to integrated warehouse operations.
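
    The trade-off described above can be made concrete with a toy calculation. The efficiency curves below are invented stand-ins, not the paper's model: picking efficiency is assumed to grow with wave size (fewer tours), while sorting efficiency is assumed to peak near a sorter capacity.

        import numpy as np

        # Hypothetical efficiency curves (all parameters are assumptions).
        def picking_eff(w):
            return w / (w + 50.0)  # saturating gain with wave size w (orders)

        def sorting_eff(w):
            return np.exp(-((w - 80.0) / 120.0) ** 2)  # peaks near sorter capacity

        waves = np.arange(10, 301)
        total = picking_eff(waves) * sorting_eff(waves)
        best = waves[np.argmax(total)]
        print(f"optimum wave size ~ {best} orders, joint efficiency {total.max():.3f}")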

  19. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As a means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8-element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized to any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feedforward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  20. Cake filtration modeling: Analytical cake filtration model and filter medium characterization

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Michael

    2008-05-15

    Cake filtration is a unit operation to separate solids from fluids in industrial processes. The build-up of a filter cake is usually accompanied by a decrease in overall permeability over the filter, leading to an increased pressure drop over the filter. For an incompressible filter cake that builds up on a homogeneous filter cloth, a linear pressure drop profile over time is expected for a constant fluid volume flow. However, experiments show curved pressure drop profiles, which are attributed to inhomogeneities of the filter (filter medium and/or residual filter cake). In this work, a mathematical filter model is developed to describe the relationship between time and overall permeability. The model considers a filter with an inhomogeneous permeability and accounts for the fluid mechanics by a one-dimensional formulation of Darcy's law, and for the cake build-up by solid continuity. The model can be solved analytically in the time domain. The analytic solution allows for the unambiguous inversion of the model to determine the inhomogeneous permeability from the time-resolved overall permeability, e.g. pressure drop measurements. An error estimation of the method is provided by rewriting the model as a convolution transformation. This method is applied to simulated and experimental pressure drop data of gas filters with textile filter cloths, and various non-uniform flow situations in practical problems are explored. A routine is developed to generate characteristic filter cycles from semi-continuous filter plant operation. The model is modified to investigate the impact of non-uniform dust concentrations. (author). 34 refs., 40 figs., 1 tab
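
    To make the mechanism concrete, the sketch below time-steps a crude parallel-patch version of such a model: Darcy flow at constant total rate over patches of different initial resistance, with local cake growth following the local flux. All parameter values are invented for illustration and are not taken from the thesis.

        import numpy as np

        mu, Q, c_alpha = 1.8e-5, 1.0, 5e6      # viscosity, total flow, cake factor (assumed)
        A = np.full(10, 0.1)                    # patch areas (m^2)
        R = np.linspace(1e5, 5e5, 10)           # inhomogeneous medium resistances

        dt, steps = 1.0, 600
        dp_hist = []
        for _ in range(steps):
            dp = mu * Q / np.sum(A / R)         # common pressure drop over all patches
            u = dp / (mu * R)                   # local face velocities (Darcy's law)
            R = R + c_alpha * u * dt            # cake build-up raises local resistance
            dp_hist.append(dp)

        # A homogeneous medium would give a straight line; inhomogeneity curves it.
        print(dp_hist[0], dp_hist[-1])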

  1. Analytical and numerical models of uranium ignition assisted by hydride formation

    International Nuclear Information System (INIS)

    Totemeier, T.C.; Hayes, S.L.

    1996-01-01

    Analytical and numerical models of uranium ignition assisted by the oxidation of uranium hydride are described. The models were developed to demonstrate that ignition of large uranium ingots could not occur as a result of possible hydride formation during storage. The thermodynamics-based analytical model predicted an overall 17 °C temperature rise of the ingot due to hydride oxidation upon opening of the storage can in air. The numerical model predicted locally higher temperature increases at the surface; the transient temperature increase quickly dissipated. The numerical model was further used to determine the conditions for which hydride oxidation does lead to ignition of uranium metal. Room-temperature ignition only occurs for high hydride fractions in the nominally oxide reaction product and high specific surface areas of the uranium metal
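
    The analytical estimate quoted above is, at heart, an adiabatic energy balance. A minimal sketch follows; all masses and heats are chosen purely for illustration and are not the report's inputs.

        # Adiabatic energy balance: heat released by hydride oxidation warms the ingot.
        m_ingot = 100.0     # kg of uranium metal (assumed)
        cp_U = 116.0        # J/(kg K), specific heat of uranium near room temperature
        m_hydride = 0.05    # kg of hydride oxidized (assumed)
        dH_ox = 4.0e6       # J/kg released by hydride oxidation (assumed)

        dT = m_hydride * dH_ox / (m_ingot * cp_U)
        print(f"bulk temperature rise ~ {dT:.1f} K")  # ~17 K for these inputs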

  2. Symbolic-computation study of the perturbed nonlinear Schrodinger model in inhomogeneous optical fibers

    International Nuclear Information System (INIS)

    Tian Bo; Gao Yitian

    2005-01-01

    A realistic, inhomogeneous fiber in optical communication systems can be described by the perturbed nonlinear Schrodinger model (also known as the normalized nonlinear Schrodinger model with periodically varying coefficients, the dispersion-managed nonlinear Schrodinger model, or the nonlinear Schrodinger model with variable coefficients). We hereby extend a direct method to this model, perform symbolic computation, and obtain two families of exact, analytic bright-solitonic solutions, with or without the chirp respectively. The parameters addressed include the shape of the bright soliton, soliton amplitude, inverse width of the soliton, chirp, frequency, center of the soliton, and center of the phase of the soliton. Of optical and physical interest, we discuss some previously published special cases of our solutions. These solutions could help future studies of optical communication systems.
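
    For orientation, one common normalized form of such a variable-coefficient nonlinear Schrodinger model, and a chirped bright-soliton ansatz for it, read as follows; the paper's exact coefficient conventions may differ.

        \[
          i\,u_z + \frac{D(z)}{2}\,u_{tt} + R(z)\,\lvert u\rvert^{2} u = i\,g(z)\,u,
        \]
        \[
          u(z,t) = A(z)\,\operatorname{sech}\!\bigl[\eta(z)\,(t - t_c(z))\bigr]\,
                   \exp\!\Bigl( i\,\tfrac{C(z)}{2}\,t^{2} + i\,\kappa\,t + i\,\phi(z) \Bigr),
        \]

    where D, R and g are the variable dispersion, nonlinearity and gain/loss coefficients, and C(z) is the chirp.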

  3. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India during December 17-19, 2015. The book covers innovations in the broad areas of computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing the design, analysis and modeling of the aforementioned key areas.

  4. An analytically solvable model for rapid evolution of modular structure.

    Directory of Open Access Journals (Sweden)

    Nadav Kashtan

    2009-04-01

    Full Text Available Biological systems often display modularity, in the sense that they can be decomposed into nearly independent subsystems. Recent studies have suggested that modular structure can spontaneously emerge if goals (environments change over time, such that each new goal shares the same set of sub-problems with previous goals. Such modularly varying goals can also dramatically speed up evolution, relative to evolution under a constant goal. These studies were based on simulations of model systems, such as logic circuits and RNA structure, which are generally not easy to treat analytically. We present, here, a simple model for evolution under modularly varying goals that can be solved analytically. This model helps to understand some of the fundamental mechanisms that lead to rapid emergence of modular structure under modularly varying goals. In particular, the model suggests a mechanism for the dramatic speedup in evolution observed under such temporally varying goals.

  5. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model......

  6. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    Science.gov (United States)

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model in that the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the numerical calculations over a wide range of the parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification shows the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.
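
    For reference, the anisotropic Rabi Hamiltonian is usually written (up to notation) as

        \[
          H = \omega\, a^{\dagger}a + \frac{\Delta}{2}\,\sigma_z
              + g_{1}\bigl(a^{\dagger}\sigma_{-} + a\,\sigma_{+}\bigr)
              + g_{2}\bigl(a^{\dagger}\sigma_{+} + a\,\sigma_{-}\bigr),
        \]

    where g_1 scales the rotating terms and g_2 the counter-rotating ones; g_2 = 0 recovers the Jaynes-Cummings model and g_1 = g_2 the original Rabi model.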

  7. Analytical calculation of detailed model parameters of cast resin dry-type transformers

    International Nuclear Information System (INIS)

    Eslamian, M.; Vahidi, B.; Hosseinian, S.H.

    2011-01-01

    Highlights: → In this paper the high-frequency behavior of cast resin dry-type transformers was simulated. → Parameters of the detailed model were calculated using an analytical method and compared with FEM results. → A lab transformer was constructed in order to compare theoretical and experimental results. -- Abstract: The non-flammable characteristic of cast resin dry-type transformers makes them suitable for many kinds of applications. This paper presents an analytical method for obtaining the parameters of the detailed model of these transformers. The calculated parameters are compared with and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.

  8. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  9. A two-dimensional analytical well model with applications to groundwater flow and convective transport modelling in the geosphere

    International Nuclear Information System (INIS)

    Chan, T.; Nakka, B.W.

    1994-12-01

    A two-dimensional analytical well model has been developed to describe steady groundwater flow in an idealized, confined aquifer intersected by a withdrawal well. The aquifer comprises a low-dipping fracture zone. The model is useful for making simple quantitative estimates of the transport of contaminants along groundwater pathways in the fracture zone to the well from an underground source that intercepts the fracture zone. This report documents the mathematical development of the analytical well model. It outlines the assumptions and method used to derive an exact analytical solution, which is verified by two other methods. It presents expressions for calculating quantities such as streamlines (groundwater flow paths), fractional volumetric flow rates, contaminant concentration in well water and minimum convective travel time to the well. In addition, this report presents the results of applying the analytical model to a site-specific conceptual model of the Whiteshell Research Area in southeastern Manitoba, Canada. This hydrogeological model includes the presence of a 20-m-thick, low-dipping (18 deg) fracture zone (LD1) that intercepts the horizon of a hypothetical disposal vault located at a depth of 500 m. A withdrawal well intercepts LD1 between the vault level and the ground surface. Predictions based on parameters and boundary conditions specific to LD1 are presented graphically. The analytical model has specific applications in the SYVAC geosphere model (GEONET) to calculate the fraction of a plume of contaminants moving up the fracture zone that is captured by the well, and to describe the drawdown in the hydraulic head in the fracture zone caused by the withdrawal well. (author). 16 refs., 6 tabs., 35 figs

  10. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  11. Analytic Model Predictive Control of Uncertain Nonlinear Systems: A Fuzzy Adaptive Approach

    Directory of Open Access Journals (Sweden)

    Xiuyan Peng

    2015-01-01

    Full Text Available A fuzzy adaptive analytic model predictive control method is proposed in this paper for a class of uncertain nonlinear systems. Specifically, invoking standard results on the Moore-Penrose matrix inverse, the mismatch problem that commonly exists between the input and output dimensions of systems is first solved. Then, resorting to the analytic model predictive control law combined with a fuzzy adaptive approach, the synthesis of a fuzzy adaptive predictive controller for the underlying systems is developed. To further reduce the impact of the fuzzy approximation error on the system and improve its robustness, a robust compensation term is introduced. It is shown that by applying the fuzzy adaptive analytic model predictive controller the rudder roll stabilization system is uniformly ultimately bounded in the H-infinity sense. Finally, simulation results demonstrate the effectiveness of the proposed method.

  12. RAMSIM: A fast computer model for mean wind flow over hills

    Energy Technology Data Exchange (ETDEWEB)

    Corbett, J-F.

    2007-06-15

    The Risø Atmospheric Mixed Spectral-Integration Model (RAMSIM) is a micro-scale, linear flow model developed to quickly calculate the mean wind flow field over orography. It was designed to bridge the gap between WAsP and similar models, which are fast but insufficiently accurate over steep slopes, and non-linear CFD models, which are accurate but too computationally expensive for routine use on a PC. RAMSIM is governed by the RANS and E-ε turbulence closure equations, expressed in non-Cartesian coordinates. A terrain-following coordinate system is created from a simple analytical expression. The equations are linearized by a perturbation expansion about the flat-terrain case. The first-order equations, representing the spatial correction due to the presence of orography, are Fourier-transformed analytically in the two horizontal dimensions. The pressure and horizontal velocity components are eliminated, resulting in a set of four ordinary differential equations (ODEs). RAMSIM is currently implemented and tested in two-dimensional space; a 3D version has been formulated but not yet implemented. In the 2D case, there are only three ODEs, depending on only two non-dimensional parameters. This is exploited by solving the ODEs by Runge-Kutta integration for all useful combinations of these parameters, and storing the results in look-up tables (LUT). The flow field over any given orography is then quickly obtained by interpolating from the LUTs and scaling the value of the flow variables for each wavenumber component of the orography, and returning to real space by inverse Fourier transform. RAMSIM was tested against measurements, as well as other authors' flow models, in four test cases: two laboratory flows over idealized terrain, and two field experiments. RAMSIM calculations generally agree with measurements over upward slopes and hilltops, but overestimate the speed very near the ground at hilltops. RAMSIM appears to have an edge over other linear models.

  13. LitPathExplorer: a confidence-based visual text analytics tool for exploring literature-enriched pathway models.

    Science.gov (United States)

    Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia

    2018-04-15

    Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  14. Improved Analytical Model of a Permanent-Magnet Brushless DC Motor

    NARCIS (Netherlands)

    Kumar, P.; Bauer, P.

    2008-01-01

    In this paper, we develop a comprehensive model of a permanent-magnet brushless DC (BLDC) motor. An analytical model for determining instantaneous air-gap field density is developed. This instantaneous field distribution can be further used to determine the cogging torque, induced back electromotive

  15. Analytical development and optimization of a graphene–solution interface capacitance model

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2014-05-01

    Full Text Available Graphene, a new carbon material that shows great potential for a range of applications because of its exceptional electronic and mechanical properties, has attracted much attention in recent years. The use of graphene in nanoscale devices plays an important role in achieving more accurate and faster devices. Although there are many experimental studies in this area, there is a lack of analytical models. Our focus is on quantum capacitance, one of the important properties of field-effect transistors (FETs). The quantum capacitance of electrolyte-gated transistors (EGFETs), along with a relevant equivalent circuit, is suggested in terms of Fermi velocity, carrier density, and fundamental physical quantities. The analytical model is compared with the experimental data, and the mean absolute percentage error (MAPE) is calculated to be 11.82. In order to decrease the error, a new function of E composed of α and β parameters is suggested. In another attempt, the ant colony optimization (ACO) algorithm is implemented for the optimization and development of an analytical model to obtain a more accurate capacitance model. To further confirm this viewpoint, based on the given results, the accuracy of the optimized model is more than 97%, which is in an acceptable range of accuracy.
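
    The error figure quoted above is a plain mean absolute percentage error. A minimal sketch of how such a score is computed follows; the measured/modeled values are made up and are not the paper's data.

        import numpy as np

        def mape(y_true, y_pred):
            """Mean absolute percentage error between measurement and model."""
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

        # Illustrative (invented) measured vs. modeled quantum capacitance values:
        measured = [0.8, 1.1, 1.6, 2.2, 2.9]
        modeled = [0.9, 1.0, 1.5, 2.4, 3.1]
        print(f"MAPE = {mape(measured, modeled):.2f}%")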

  16. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    OpenAIRE

    Zhang, Guofeng; Zhu, Hanjie

    2015-01-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model in that the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the ...

  17. Analytical Modeling of the High Strain Rate Deformation of Polymer Matrix Composites

    Science.gov (United States)

    Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos

    2003-01-01

    The results presented here are part of an ongoing research program to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric matrix materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical plasticity theory definitions of effective stress and effective plastic strain are modified by applying variations of the Drucker-Prager yield criterion. To verify the revised formulation, the shear and tensile deformation of a representative toughened epoxy is analyzed across a wide range of strain rates (from quasi-static to high strain rates) and the results are compared to experimentally obtained values. For the analyzed polymers, both the tensile and shear stress-strain curves computed using the analytical model correlate well with values obtained through experimental tests. The polymer constitutive equations are implemented within a strength-of-materials based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. In the micromechanics, the unit cell is divided into a number of independently analyzed slices, and laminate theory is then applied to obtain the effective deformation of the unit cell. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite (composed of the representative polymer analyzed in the correlation of the polymer constitutive equations) for several fiber orientation angles across a variety of strain rates. The computed values compare favorably to experimentally obtained results.

  18. Performance study of Active Queue Management methods: Adaptive GRED, REDD, and GRED-Linear analytical model

    Directory of Open Access Journals (Sweden)

    Hussein Abdel-jaber

    2015-10-01

    Full Text Available Congestion control is one of the hot research topics that help maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and the GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, which are based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay, and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results with regard to the probabilities of both packet overflow and packet dropping.
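
    For readers unfamiliar with the RED family, the sketch below shows the standard "gentle" piecewise dropping curve that such methods build on: linear from 0 to p_max between the two thresholds, then rising gently from p_max to 1 between max_th and 2*max_th. The threshold values are illustrative assumptions, not the paper's settings.

        def gred_drop_probability(avg_q, min_th, max_th, p_max):
            """Gentle RED dropping curve as a function of the average queue length."""
            if avg_q < min_th:
                return 0.0
            if avg_q < max_th:
                return p_max * (avg_q - min_th) / (max_th - min_th)
            if avg_q < 2 * max_th:
                return p_max + (1 - p_max) * (avg_q - max_th) / max_th
            return 1.0

        for q in (3, 9, 15, 25, 35):
            print(q, round(gred_drop_probability(q, min_th=5, max_th=15, p_max=0.1), 3))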

  19. Organizational Models for Big Data and Analytics

    Directory of Open Access Journals (Sweden)

    Robert L. Grossman

    2014-04-01

    Full Text Available In this article, we introduce a framework for determining how analytics capability should be distributed within an organization. Our framework stresses the importance of building a critical mass of analytics staff, centralizing or decentralizing the analytics staff to support business processes, and establishing an analytics governance structure to ensure that analytics processes are supported by the organization as a whole.

  20. High Z neoclassical transport: Application and limitation of analytical formulae for modelling JET experimental parameters

    Science.gov (United States)

    Breton, S.; Casson, F. J.; Bourdelle, C.; Angioni, C.; Belli, E.; Camenen, Y.; Citrin, J.; Garbet, X.; Sarazin, Y.; Sertoli, M.; JET Contributors

    2018-01-01

    Heavy impurities, such as tungsten (W), can exhibit strongly poloidally asymmetric density profiles in rotating or radio frequency heated plasmas. In the metallic environment of JET, the poloidal asymmetry of tungsten enhances its neoclassical transport up to an order of magnitude, so that neoclassical convection dominates over turbulent transport in the core. Accounting for asymmetries in neoclassical transport is hence necessary in the integrated modeling framework. The neoclassical drift kinetic code, NEO [E. Belli and J. Candy, Plasma Phys. Controlled Fusion 50, 095010 (2008)], includes the impact of poloidal asymmetries on W transport. However, the computational cost required to run NEO significantly slows down integrated modeling. A previous analytical formulation to describe heavy impurity neoclassical transport in the presence of poloidal asymmetries in specific collisional regimes [C. Angioni and P. Helander, Plasma Phys. Controlled Fusion 56, 124001 (2014)] is compared in this work to numerical results from NEO. Within the domain of validity of the formula, the factor for reducing the temperature screening due to poloidal asymmetries had to be empirically adjusted. After adjustment, the modified formula can reproduce NEO results outside of its definition domain, with some limitations: when main ions are in the banana regime, the formula reproduces NEO results whatever the collisionality regime of impurities, provided that the poloidal asymmetry is not too large. However, for very strong poloidal asymmetries, agreement requires impurities in the Pfirsch-Schlüter regime. Within the JETTO integrated transport code, the analytical formula combined with the poloidally symmetric neoclassical code NCLASS [W. A. Houlberg et al., Phys. Plasmas 4, 3230 (1997)] predicts the same tungsten profile as NEO in certain cases, while saving a factor of one thousand in computer time, which can be useful in scoping studies. The parametric dependencies of the temperature

  1. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  2. The Analytic Information Warehouse (AIW): a platform for analytics using electronic health record data.

    Science.gov (United States)

    Post, Andrew R; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H

    2013-06-01

    To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in 5 years of data from our institution's clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. TrajAnalytics: An Open-Source, Web-Based Visual Analytics Software of Urban Trajectory Data

    OpenAIRE

    Zhao, Ye

    2018-01-01

    We developed a software system named TrajAnalytics, which explicitly supports interactive visual analytics of emerging trajectory data. It offers data management capability and supports various data queries by leveraging web-based computing platforms. It allows users to visually conduct queries and make sense of massive trajectory data.

  4. Molecular modeling of polymer composite-analyte interactions in electronic nose sensors

    Science.gov (United States)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K. S.

    2003-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL electronic nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings with no hydrogens. The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings into the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. The analytes studied are representative of both inorganic and organic compounds. The results are analyzed for the composite microstructure by calculating radial distribution profiles, and for the sensor response by predicting the interaction energies of the analytes with the composites. © 2003 Elsevier Science B.V. All rights reserved.

  5. Vibration Based Diagnosis for Planetary Gearboxes Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Liu Hong

    2016-01-01

    Full Text Available The application of conventional vibration-based diagnostic techniques to planetary gearboxes is a challenge because of the complexity of frequency components in the measured spectrum, which is the result of relative motions between the rotary planets and the fixed accelerometer. In practice, since the fault signatures are usually contaminated by noise and by vibrations from other mechanical components of the gearboxes, the diagnostic efficacy may deteriorate further. Thus, it is essential to develop a novel vibration-based scheme to diagnose gear failures for planetary gearboxes. Following a brief literature review, the paper begins with the introduction of an analytical model of planetary gear-sets developed by the authors in previous works, which can predict the distinct behaviors of fault-introduced sidebands. This analytical model is easy to implement because the only prerequisite information is the basic geometry of the planetary gear-set. Afterwards, an automated diagnostic scheme is proposed to cope with the challenges associated with the characteristic configuration of planetary gearboxes. The proposed vibration-based scheme integrates the analytical model, a denoising algorithm, and frequency-domain indicators into one synergistic system for the detection and identification of damaged gear teeth in planetary gearboxes. Its performance is validated with dynamic simulations and experimental data from a planetary gearbox test rig.
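
    To illustrate why basic gear-set geometry suffices, the sketch below computes where fault-induced sidebands fall for a planetary gear-set with a stationary ring gear. Tooth counts and shaft speed are assumed values, not the paper's test rig data.

        # Carrier and mesh frequencies for a fixed ring gear, from tooth counts only.
        z_ring, z_sun = 72, 24     # assumed tooth counts
        f_sun = 20.0               # sun (input) shaft frequency, Hz (assumed)

        f_carrier = f_sun * z_sun / (z_sun + z_ring)  # carrier rotation frequency
        f_mesh = f_carrier * z_ring                   # gear mesh frequency

        # Sidebands cluster around the mesh frequency at multiples of the carrier rate.
        sidebands = [f_mesh + k * f_carrier for k in range(-3, 4)]
        print(f"mesh = {f_mesh:.1f} Hz, sidebands = {[round(f, 1) for f in sidebands]}")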

  6. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    Science.gov (United States)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

    In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented by combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both front- and back-gate electrodes to modulate the barrier at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 at the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on this electric field expression, the tunneling current is obtained using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to establish the accuracy of the proposed model.
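
    Kane's local band-to-band tunneling model referred to above has the generic two-parameter form below, where A and B are material-dependent constants; this is the textbook form, not necessarily the paper's exact variant.

        \[
          G_{\mathrm{BTBT}} \;=\; A\,\frac{\lvert E\rvert^{2}}{\sqrt{E_g}}\,
          \exp\!\left(-\,\frac{B\,E_g^{3/2}}{\lvert E\rvert}\right),
        \]

    where E is the local electric field and E_g the band gap; the drain current then follows by integrating qG over the tunneling volume.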

  7. HTS axial flux induction motor with analytic and FEA modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: alexlee.zn@gmail.com; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-11-15

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested.

  8. HTS axial flux induction motor with analytic and FEA modeling

    International Nuclear Information System (INIS)

    Li, S.; Fan, Y.; Fang, J.; Qin, W.; Lv, G.; Li, J.H.

    2013-01-01

    Highlights: •A high temperature superconductor axial flux induction motor and a novel maglev scheme are presented. •Analytic method and finite element method have been adopted to model the motor and to calculate the force. •Magnetic field distribution in HTS coil is calculated by analytic method. •An effective method to improve the critical current of HTS coil is presented. •AC losses of HTS coils in the HTS axial flux induction motor are estimated and tested. -- Abstract: This paper presents a high-temperature superconductor (HTS) axial-flux induction motor, which can output levitation force and torque simultaneously. In order to analyze the character of the force, analytic method and finite element method are adopted to model the motor. To make sure the HTS can carry sufficiently large current and work well, the magnetic field distribution in HTS coil is calculated. An effective method to improve the critical current of HTS coil is presented. Then, AC losses in HTS windings in the motor are estimated and tested

  9. Analytical synthetic methods of solution of neutron transport equation with diffusion theory approaches energy multigroup

    International Nuclear Information System (INIS)

    Moraes, Pedro Gabriel B.; Leite, Michel C.A.; Barros, Ricardo C.

    2013-01-01

    In this work we developed software to model one-dimensional neutron transport problems in the multigroup energy formulation and to present the results in tables and graphs. The numerical method we use to solve the neutron diffusion problem is analytical, thus eliminating the truncation errors that appear in classical numerical methods, e.g., the finite difference method. This analytical numerical method increases computational efficiency, since no refined spatial discretization is necessary: for any spatial discretization grid used, the numerical result generated at a given point of the domain remains unchanged, up to the rounding errors of finite computational arithmetic. We chose to develop the computational application on the MatLab platform for numerical computation; the program interface, with its controls, is simple and easy to use. We consider it important to model this neutron transport problem with a fixed source in the context of calculations for radiation shielding that protects the biosphere and whatever could be sensitive to ionizing radiation
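
    As a minimal illustration of why such analytical methods are mesh-independent, consider the one-group version of the fixed-source diffusion problem in a homogeneous slab region:

        \[
          -D\,\frac{d^{2}\phi}{dx^{2}} + \Sigma_a\,\phi(x) = S,
          \qquad
          \phi(x) = \frac{S}{\Sigma_a} + A\cosh\frac{x}{L} + B\sinh\frac{x}{L},
          \quad L \equiv \sqrt{D/\Sigma_a},
        \]

    with A and B fixed by the boundary conditions. Since the solution is exact at every point, refining the spatial grid changes nothing but round-off.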

  10. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    Dec 1, 2015 ... Abstract. This work developed and simulated a mathematical model for a mobile wireless computational Grid ... which mobile modes will process the tasks .... evaluation are analytical modelling, simulation ... MATLAB 7.10.0.

  11. Analytical maximum-likelihood method to detect patterns in real networks

    International Nuclear Information System (INIS)

    Squartini, Tiziano; Garlaschelli, Diego

    2011-01-01

    In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, their generation is still problematic. Existing approaches are either computationally demanding and beyond analytic control, or analytically accessible but highly approximate. Here, we propose a solution to this long-standing problem by introducing a fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property analytically across the entire graph ensemble is as short as that required to compute the same property using the adjacency matrix of the single original network. Our method reveals that the null behavior of various correlation properties is different from what was believed previously, and is highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them.
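
    The flavor of such maximum-likelihood null models can be sketched for the binary undirected case: find "fitness" values x_i so that expected degrees match the observed ones, after which the expectation of any property uses the link probabilities p_ij = x_i x_j / (1 + x_i x_j). The degree sequence and the simple fixed-point iteration below are illustrative, not the paper's algorithm or data.

        import numpy as np

        k = np.array([1.0, 2.0, 2.0, 3.0, 2.0])  # toy observed degree sequence
        x = k / np.sqrt(k.sum())                 # starting guess for the fitnesses

        for _ in range(1000):                    # simple fixed-point iteration
            xx = np.outer(x, x)
            frac = x[None, :] / (1.0 + xx)       # entries x_j / (1 + x_i x_j)
            np.fill_diagonal(frac, 0.0)
            x = k / frac.sum(axis=1)

        p = np.outer(x, x) / (1.0 + np.outer(x, x))
        np.fill_diagonal(p, 0.0)
        print(np.round(p.sum(axis=1), 3))        # expected degrees, ~ equal to k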

  12. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  13. Learning, Learning Analytics, Activity Visualisation and Open learner Model

    DEFF Research Database (Denmark)

    Bull, Susan; Kickmeier-Rust, Michael; Vatrapu, Ravi

    2013-01-01

    This paper draws on visualisation approaches in learning analytics, considering how classroom visualisations can come together in practice. We suggest an open learner model in situations where many tools and activity visualisations produce more visual information than can be readily interpreted....

  14. A semi-analytic model of magnetized liner inertial fusion

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Ryan D.; Slutz, Stephen A. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2015-05-15

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.
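
    Two idealized scalings sit behind items (4) and (6) in the list above; the full model corrects both for the loss terms mentioned. These are simplifications for orientation, not the paper's working equations.

        \[
          T \;\propto\; \left(\frac{R_0}{R}\right)^{2(\gamma-1)}
          \quad\text{(adiabatic fuel heating; } \gamma = 5/3 \Rightarrow T \propto (R_0/R)^{4/3}\text{)},
          \qquad
          B_z R^{2} \;\approx\; \text{const}
          \quad\text{(ideal magnetic flux compression).}
        \]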

  15. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding among potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
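
    A toy sketch of the underlying Information Retrieval machinery (TF-IDF weighting plus cosine similarity over model annotations) is shown below. The model identifiers and annotation strings are invented; this is not the BioModels Database implementation.

        import math
        from collections import Counter

        docs = {
            "BIOMD001": "glycolysis kinetic model yeast",
            "BIOMD002": "calcium oscillation signalling model",
            "BIOMD003": "yeast cell cycle kinetic model",
        }

        def tfidf(corpus):
            # Weight each term by its frequency times log(inverse document frequency).
            n = len(corpus)
            df = Counter(t for text in corpus.values() for t in set(text.split()))
            return {
                doc: {t: c * math.log(n / df[t]) for t, c in Counter(text.split()).items()}
                for doc, text in corpus.items()
            }

        def cosine(u, v):
            dot = sum(u[t] * v.get(t, 0.0) for t in u)
            nu = math.sqrt(sum(x * x for x in u.values()))
            nv = math.sqrt(sum(x * x for x in v.values()))
            return dot / (nu * nv) if nu and nv else 0.0

        vecs = tfidf(docs)
        query = {"yeast": 1.0, "model": 1.0}
        for doc, vec in sorted(vecs.items(), key=lambda kv: -cosine(query, kv[1])):
            print(doc, round(cosine(query, vec), 3))   # relevance-ranked result list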

  16. Analytical modeling of glucose biosensors based on carbon nanotubes.

    Science.gov (United States)

    Pourasl, Ali H; Ahmadi, Mohammad Taghi; Rahmani, Meisam; Chin, Huei Chaeng; Lim, Cheng Siong; Ismail, Razali; Tan, Michael Loong Peng

    2014-01-15

    In recent years, carbon nanotubes have received widespread attention as promising carbon-based nanoelectronic devices. Due to their exceptional physical, chemical, and electrical properties, namely a high surface-to-volume ratio, their enhanced electron transfer properties, and their high thermal conductivity, carbon nanotubes can be used effectively as electrochemical sensors. The integration of carbon nanotubes with a functional group provides a good and solid support for the immobilization of enzymes. The determination of glucose levels using biosensors, particularly in the medical diagnostics and food industries, is gaining mass appeal. Glucose biosensors detect the glucose molecule by catalyzing glucose to gluconic acid and hydrogen peroxide in the presence of oxygen. This action provides high accuracy and a quick detection rate. In this paper, a single-wall carbon nanotube field-effect transistor biosensor for glucose detection is analytically modeled. In the proposed model, the glucose concentration is presented as a function of gate voltage. Subsequently, the proposed model is compared with existing experimental data. A good consensus between the model and the experimental data is reported. The simulated data demonstrate that the analytical model can be employed with an electrochemical glucose sensor to predict the behavior of the sensing mechanism in biosensors.

  17. SPICE compatible analytical electron mobility model for biaxial strained-Si-MOSFETs

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhry, Amit; Sangwan, S. [UIET, Panjab University, Chandigarh (India); Roy, J. N., E-mail: amit_chaudhry01@yahoo.com [Solar Semiconductro Pvt. Ltd, Hyderabad (India)

    2011-05-15

    This paper describes an analytical model for the bulk electron mobility in strained-Si layers as a function of strain. Phonon scattering, Coulombic scattering and surface roughness scattering are included to analyze the full mobility model. Explicit analytical calculations of all of the parameters have been made to accurately estimate the electron mobility. The results predict an increase in the electron mobility with the application of biaxial strain, as also predicted by the basic theory of strain physics of metal oxide semiconductor (MOS) devices. The results have also been compared with numerically reported results and show good agreement. (semiconductor devices)
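
    Models of this kind conventionally combine the three scattering channels through Matthiessen's rule; that rule is assumed here for illustration, and the paper may weight the terms differently.

        \[
          \frac{1}{\mu_{\mathrm{eff}}}
          = \frac{1}{\mu_{\mathrm{ph}}}
          + \frac{1}{\mu_{\mathrm{C}}}
          + \frac{1}{\mu_{\mathrm{sr}}},
        \]

    with phonon, Coulombic and surface-roughness contributions respectively, each a function of strain, temperature and effective field.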

  18. SPICE compatible analytical electron mobility model for biaxial strained-Si-MOSFETs

    International Nuclear Information System (INIS)

    Chaudhry, Amit; Sangwan, S.; Roy, J. N.

    2011-01-01

    This paper describes an analytical model for the bulk electron mobility in strained-Si layers as a function of strain. Phonon scattering, Coulombic scattering and surface roughness scattering are included to analyze the full mobility model. Explicit analytical calculations of all of the parameters have been made to accurately estimate the electron mobility. The results predict an increase in the electron mobility with the application of biaxial strain, as also predicted by the basic theory of strain physics of metal oxide semiconductor (MOS) devices. The results have also been compared with numerically reported results and show good agreement. (semiconductor devices)

  19. A semi-analytical study of positive corona discharge in wire–plane electrode configuration

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F; Chen, J H

    2013-01-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables. (paper)
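
    Warburg's law, used above as the wire boundary condition, states that the current density arriving at the plane falls off with the angle θ from the vertical approximately as below; the exponent value varies slightly across the literature.

        \[
          j(\theta) \;\simeq\; j_0 \cos^{m}\theta,
          \qquad \lvert\theta\rvert \lesssim 60^{\circ},
          \quad m \approx 4.65\text{--}4.82,
        \]

    with j_0 the current density directly beneath the wire; cos^5 θ is a common engineering approximation.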

  20. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    Science.gov (United States)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametric study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two-dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  1. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found, and it is also found that in D = 3 this contribution is finite. (author). 19 refs

  2. An analytical turn-on power loss model for 650-V GaN eHEMTs

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Shen, Zhan

    2018-01-01

    This paper proposes an improved analytical turn-on power loss model for 650-V GaN eHEMTs. The static characteristics, i.e., the parasitic capacitances and transconductance, are firstly modeled. Then the turn-on process is divided into multiple stages and analyzed in detail; as a result, the time-domain solutions for the drain-source voltage and drain current are obtained. Finally, double-pulse tests are conducted to verify the proposed power loss model. This analytical model enables accurate and fast switching behavior characterization and power loss prediction.
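
    For context, a minimal Python sketch of the zeroth-order textbook estimate that such multi-stage models refine is given below: turn-on energy as voltage-current overlap plus the output-capacitance energy. The device values are invented, and this is not the paper's staged model.

```python
# Coarse textbook estimate of turn-on switching energy: overlap of
# drain-source voltage and drain current during the current-rise and
# voltage-fall intervals, plus the output-capacitance energy dumped
# into the channel. This is NOT the paper's multi-stage model, only the
# zeroth-order approximation it improves upon; all values are made up.
def turn_on_energy(v_dc, i_load, t_current_rise, t_voltage_fall, e_oss):
    overlap = 0.5 * v_dc * i_load * (t_current_rise + t_voltage_fall)
    return overlap + e_oss

e_on = turn_on_energy(v_dc=400.0, i_load=15.0,
                      t_current_rise=5e-9, t_voltage_fall=10e-9,
                      e_oss=7e-6)
print(f"E_on ~ {e_on * 1e6:.1f} uJ per switching event")
print(f"P_on ~ {e_on * 100e3:.1f} W at 100 kHz")
```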

  3. Use of information technologies in teaching course "Analytical geometry" in higher schools on example of software "ANALYTICAL GEOMETRY"

    OpenAIRE

    V. B. Grigorieva

    2009-01-01

    The article considers methodological questions of the use of computer technologies, taking the software "Analytical geometry" as an example, in the process of teaching a course on analytical geometry in higher school.

  4. A Comparison of Analytical and Numerical Methods for Modeling Dissolution and Other Reactions in Transport Limited Systems

    Science.gov (United States)

    Hochstetler, D. L.; Kitanidis, P. K.

    2009-12-01

    Modeling the transport of reactive species is a computationally demanding problem, especially in complex subsurface media, where it is crucial to improve understanding of geochemical processes and the fate of groundwater contaminants. In most of these systems, reactions are inherently fast and the actual rates of transformation are limited by the slower physical transport mechanisms. There have been efforts to reformulate multi-component reactive transport problems into systems that are simpler and less demanding to solve. These reformulations include defining conservative species and decoupling the reactive transport equations so that fewer of them must be solved, leaving mostly conservative equations for transport [e.g., De Simoni et al., 2005; De Simoni et al., 2007; Kräutle and Knabner, 2007; Molins et al., 2004]. Complex and computationally cumbersome numerical codes used to solve such problems have also led De Simoni et al. [2005] to develop more manageable analytical solutions. Furthermore, this work evaluates reaction rates and has reaffirmed that the mixing rate, ∇u^T D ∇u, where u is a solute concentration and D is the dispersion tensor, as defined by Kitanidis [1994], is an important and sometimes dominant factor in determining reaction rates. Thus, mixing of solutions is often reaction-limiting. We will present results from analytical and computational modeling of multi-component reactive-transport problems. The results have applications to dissolution of solid boundaries (e.g., calcite), dissolution of non-aqueous phase liquids (NAPLs) in separate phases, and mixing of saltwater and freshwater (e.g. saltwater intrusion in coastal carbonate aquifers). We quantify reaction rates, compare numerical and analytical results, and analyze under what circumstances which approach is most effective for a given problem. References: De Simoni, M., et al. (2005), A procedure for the solution of multicomponent reactive transport problems, Water Resources Research, 41

  5. Analytical Model for Diffusive Evaporation of Sessile Droplets Coupled with Interfacial Cooling Effect.

    Science.gov (United States)

    Nguyen, Tuan A H; Biggs, Simon R; Nguyen, Anh V

    2018-05-30

    Current analytical models for sessile droplet evaporation do not consider the nonuniform temperature field within the droplet and can overpredict the evaporation by 20%. This deviation can be attributed to a significant temperature drop due to the release of the latent heat of evaporation along the air-liquid interface. We report, for the first time, an analytical solution of the sessile droplet evaporation coupled with this interfacial cooling effect. The two-way coupling model of the quasi-steady thermal diffusion within the droplet and the quasi-steady diffusion-controlled droplet evaporation is conveniently solved in the toroidal coordinate system by applying the method of separation of variables. Our new analytical model for the coupled vapor concentration and temperature fields is in closed form and is applicable for a full range of spherical-cap shape droplets of different contact angles and types of fluids. Our analytical results are uniquely quantified by a dimensionless evaporative cooling number E_o whose magnitude is determined only by the thermophysical properties of the liquid and the atmosphere. Accordingly, the larger the magnitude of E_o, the more significant the effect of the evaporative cooling, which results in stronger suppression of the evaporation rate. The classical isothermal model is recovered if the temperature gradient along the air-liquid interface is negligible (E_o = 0). For substrates with very high thermal conductivities (isothermal substrates), our analytical model predicts a reversal of the temperature gradient along the droplet-free surface at a contact angle of 119°. Our findings pose interesting challenges but also guidance for experimental investigations.
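
    The classical isothermal limit that the coupled model recovers at E_o = 0 can be sketched in a few lines of Python using the well-known Hu-Larson approximation for the contact-angle dependence; the cooling correction itself is not reproduced here, and the property values are nominal ones for water in dry air.

```python
import numpy as np

# Classical isothermal, diffusion-limited evaporation rate of a sessile
# droplet, using the Hu-Larson polynomial fit for the contact-angle
# factor (valid for 0 <= theta <= 90 deg). This is the E_o = 0 limit
# that the paper's coupled model recovers; the cooling correction itself
# is not reproduced here. Property values: water in dry air at ~25 C.
D = 2.5e-5        # vapor diffusivity in air (m^2/s)
c_sat = 2.3e-2    # saturation vapor concentration (kg/m^3)
H = 0.0           # relative humidity far from the drop

def evaporation_rate(radius_m, theta_rad):
    """Mass loss rate dm/dt (kg/s, negative), isothermal limit."""
    f_theta = 0.27 * theta_rad ** 2 + 1.30
    return -np.pi * radius_m * D * c_sat * (1.0 - H) * f_theta

for theta_deg in (20, 60, 90):
    rate = evaporation_rate(0.5e-3, np.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg: dm/dt = {rate * 1e9:7.2f} ug/s")
```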

  6. A physically based analytical spatial air temperature and humidity model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  7. Validated analytical modeling of diesel engine regulated exhaust CO emission rate

    Directory of Open Access Journals (Sweden)

    Waleed F Faris

    2016-06-01

    Although vehicle analytical models are often favored for their explainable mathematical trends, no analytical model of the regulated diesel exhaust CO emission rate for trucks has been developed yet. This research develops and validates, for trucks, an analytical model of the steady-speed regulated diesel exhaust CO emission rate. It has been found that the steady-speed CO exhaust emission rate is based on (1) CO2 dissociation, (2) the water–gas shift reaction, and (3) the incomplete combustion of hydrocarbon. It has been found as well that the steady-speed CO exhaust emission rate based on CO2 dissociation is considerably less than the rate based on the water–gas shift reaction. It has also been found that the steady-speed CO exhaust emission rate based on the water–gas shift reaction is the dominant source of CO exhaust emission. The study shows that the average percentage deviation of the steady-speed simulated results from the corresponding field data is 1.7% for all freeway cycles, with a 99% coefficient of determination at the 95% confidence level. This deviation of the simulated results from field data outperforms its counterparts in widely recognized models such as the comprehensive modal emissions model and VT-Micro for all freeway cycles.

  8. Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation

    Science.gov (United States)

    Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun

    2018-03-01

    A driver-pickup probe possesses better sensitivity and flexibility due to the individual optimization of each coil, and is frequently employed in eddy current (EC) array probes. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled with a spatial transformation for eddy current nondestructive evaluation. Basically, the core of the formulation is to obtain the projection of the magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps. The first step is to project the MVP along the pickup coil onto a horizontal plane, and the second is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model indicate good agreement with those from the finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).

  9. Analytical Business Model for Sustainable Distributed Retail Enterprises in a Competitive Market

    Directory of Open Access Journals (Sweden)

    Courage Matobobo

    2016-02-01

    Retail enterprises are organizations that sell goods in small quantities to consumers for personal consumption. In distributed retail enterprises, data is administered per branch. It is important for retail enterprises to make use of the data generated within the organization to determine consumer patterns and behaviors. Large organizations find it difficult to ascertain customer preferences by merely observing transactions. This has led to quantifiable losses, such as loss of market share to competitors and targeting of the wrong market. Although some enterprises have implemented classical business models to address these challenging issues, they still lack analytics-based marketing programs to gain a competitive advantage to deal with likely catastrophic events. This research develops an analytical business (ARANN) model for distributed retail enterprises in a competitive market environment to address the current laxity through the best arrangement of shelf products per branch. The ARANN model is built on association rules, complemented by artificial neural networks, with the two techniques mutually strengthening each other's results. According to the experimental analysis, the ARANN model outperforms the state-of-the-art model, implying improved confidence in business information management within the dynamically changing world economy.

  10. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined by the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and more intelligent use of the networking.

  11. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  12. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  13. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    Science.gov (United States)

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  14. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    Science.gov (United States)

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  15. Analytical approach to ecophysiological forest modeling. Computer reference-information system

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.

    1994-11-01

    Considerable attention is now directed to problems associated with the analysis and classification of the mathematical modeling methods to be applied for studying the structure and dynamics of plant (especially forest) communities, for the accounting and planning of forestry activities, and for monitoring efforts. Among the numerous models of plant objects and plant communities that differ in modeling purpose, as well as in the mechanisms and modes of description, this paper selects the class of so-called 'ecophysiological', or 'explanatory', models, i.e. those involving, as variables, quantities with a direct ecophysiological interpretation. 69 refs

  16. Savannah River Laboratory DOSTOMAN code: a compartmental pathways computer model of contaminant transport

    International Nuclear Information System (INIS)

    King, C.M.; Wilhite, E.L.; Root, R.W. Jr.

    1985-01-01

    The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs
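
    A minimal sketch of the mathematics underlying such compartmental codes, assuming a simple linear three-compartment chain with invented rate constants (not DOSTOMAN's actual pathways), is the exact matrix-exponential solution of dx/dt = Ax:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of a compartmental pathways model: linear first-order
# transfers dx/dt = A x, solved exactly with the matrix exponential.
# The three-compartment chain and rate constants below are invented for
# illustration, not taken from the SRL DOSTOMAN code.
k12, k23, k3out = 0.05, 0.02, 0.01      # transfer rates (1/yr), hypothetical
A = np.array([[-k12,   0.0,    0.0],
              [ k12,  -k23,    0.0],
              [ 0.0,   k23, -k3out]])

x0 = np.array([1.0, 0.0, 0.0])          # all inventory starts in compartment 1

for t in (0.0, 10.0, 50.0, 100.0):      # years
    x = expm(A * t) @ x0                # exact analytical solution x(t)
    print(f"t = {t:5.1f} yr: inventories = {np.round(x, 4)}")
```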

  17. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  18. Computer model of polycrystal structure formation of plasma sprayed Be coatings

    International Nuclear Information System (INIS)

    Tyupkina, O.G.; Meshchankin, N.V.; Sarymsakov, D.A.

    1996-01-01

    One of the problems in the creation of a controlled thermonuclear fusion reactor is obtaining a material with significant radiation resistance. Promising materials from this point of view might be those obtained by plasma spraying of Be onto a substrate. Analytical determination of the durability properties of Be coatings is impossible because of the varied interacting processes taking place in crystallizing bodies, and the experimental route requires significant financial expenditure. In the present article an attempt is made to estimate the influence of different cooling regimes on the forming polycrystal structure, and to analyse the dynamics of liquid coating solidification using computer simulation. The change in the number and size distribution of grains in the layers was studied for different cooling regimes. For this purpose, the heat-exchange coefficient was varied in the equation describing the process of heat exchange between the Be and the substrate. Results obtained with the proposed model correspond well with the pattern observed in practice. A computer model of crystallization was thus developed, which allows the characteristics of the elementary acts of crystallization to be obtained from the macroscopic parameters of the sample, and the process of melted Be solidification to be observed.

  19. Shielding computations for solution transfer lines from Analytical Lab to process cells of Demonstration Fast Reactor Plant (DFRP)

    International Nuclear Information System (INIS)

    Baskar, S.; Jose, M.T.; Baskaran, R.; Venkatraman, B.

    2018-01-01

    The diluted virgin solutions (both aqueous and organic) and the aqueous analytical waste generated from experimental analysis of process solutions, pertaining to the Fast Breeder Test Reactor (FBTR) and the Prototype Fast Breeder Reactor (PFBR), in glove boxes of the Active Analytical Laboratory (AAL) are pumped back to the process cells through a pipe-in-pipe arrangement. There are 6 transfer lines (length 15-32 m), 2 for each type of transfer. The transfer lines pass through the area inside the AAL and also the operating area. Hence it is required to compute the necessary radial shielding around the lines to limit the dose rates in both areas to the permissible values as per the regulatory requirements

  20. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that the current trends of strengthening the general-education and worldview functions of computer science define the necessity of additional research on the…

  1. Analytic models for the evolution of semilocal string networks

    International Nuclear Information System (INIS)

    Nunes, A. S.; Martins, C. J. A. P.; Avgoustidis, A.; Urrestilla, J.

    2011-01-01

    We revisit previously developed analytic models for defect evolution and adapt them appropriately for the study of semilocal string networks. We thus confirm the expectation (based on numerical simulations) that linear scaling evolution is the attractor solution for a broad range of model parameters. We discuss in detail the evolution of individual semilocal segments, focusing on the phenomenology of segment growth, and also provide a preliminary comparison with existing numerical simulations.

  2. VACET: Proposed SciDAC2 Visualization and Analytics Center for Enabling Technologies

    International Nuclear Information System (INIS)

    Bethel, W; Johnson, C; Hansen, C; Parker, S; Sanderson, A; Silva, C; Tricoche, X; Pascucci, V; Childs, H; Cohen, J; Duchaineau, M; Laney, D; Lindstrom, P; Ahern, S; Meredith, J; Ostrouchov, G; Joy, K; Hamann, B

    2006-01-01

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang', which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies

  3. Analytical and computational methodology to assess the over pressures generated by a potential catastrophic failure of a cryogenic pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Zamora, I.; Fradera, J.; Jaskiewicz, F.; Lopez, D.; Hermosa, B.; Aleman, A.; Izquierdo, J.; Buskop, J.

    2014-07-01

    Idom has participated in the risk evaluation of Safety Important Class (SIC) structures subjected to overpressures generated by a catastrophic failure of a cryogenic pressure vessel at the ITER plant site. The evaluation implements both analytical and computational methodologies, achieving consistent and robust results. (Author)

  4. Analytical and computational methodology to assess the over pressures generated by a potential catastrophic failure of a cryogenic pressure vessel

    International Nuclear Information System (INIS)

    Zamora, I.; Fradera, J.; Jaskiewicz, F.; Lopez, D.; Hermosa, B.; Aleman, A.; Izquierdo, J.; Buskop, J.

    2014-01-01

    Idom has participated in the risk evaluation of Safety Important Class (SIC) structures subjected to overpressures generated by a catastrophic failure of a cryogenic pressure vessel at the ITER plant site. The evaluation implements both analytical and computational methodologies, achieving consistent and robust results. (Author)

  5. Symbolic computation of analytic approximate solutions for nonlinear differential equations with initial conditions

    Science.gov (United States)

    Lin, Yezhi; Liu, Yinping; Li, Zhibin

    2012-01-01

    The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials, and the two-step Adomian decomposition method (TSADM) combined with the Padé technique, a new algorithm is proposed to construct accurate analytic approximations of nonlinear differential equations with initial conditions. Furthermore, a MAPLE package is developed, which is user-friendly and efficient. One only needs to input a system, initial conditions and several necessary parameters, then our package will automatically deliver analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the validity of the package. Our program provides a helpful and easy-to-use tool in science and engineering to deal with initial value problems.
    Program summary:
    - Program title: NAPA
    - Catalogue identifier: AEJZ_v1_0
    - Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJZ_v1_0.html
    - Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    - Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    - No. of lines in distributed program, including test data, etc.: 4060
    - No. of bytes in distributed program, including test data, etc.: 113 498
    - Distribution format: tar.gz
    - Programming language: MAPLE R13
    - Computer: PC
    - Operating system: Windows XP/7
    - RAM: 2 Gbytes
    - Classification: 4.3
    - Nature of problem: Solve nonlinear differential equations with initial conditions.
    - Solution method: Adomian decomposition method and Padé technique.
    - Running time: Seconds at most in routine uses of the program. Special tasks may take up to some minutes.
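
    As a minimal illustration of the Adomian decomposition method that the package automates (this sketch is not NAPA itself, and uses Python/SymPy rather than MAPLE), consider the initial value problem y' = y^2, y(0) = 1, whose exact solution is 1/(1 - t):

```python
import sympy as sp

# Minimal sketch of the Adomian decomposition method (ADM) for the IVP
# y' = y^2, y(0) = 1, exact solution 1/(1 - t). Illustrates the class of
# algorithm the NAPA package automates; it is not NAPA itself.
t, lam = sp.symbols("t lambda")
N = lambda y: y ** 2                      # the nonlinearity N(y)

terms = [sp.Integer(1)]                   # y_0 = y(0)
for n in range(1, 6):
    # Adomian polynomial:
    # A_{n-1} = (1/(n-1)!) d^{n-1}/dlam^{n-1} N(sum_k lam^k y_k) | lam=0
    series = sum(lam ** k * yk for k, yk in enumerate(terms))
    A = sp.diff(N(series), lam, n - 1).subs(lam, 0) / sp.factorial(n - 1)
    terms.append(sp.integrate(A, (t, 0, t)))  # y_n = integral of A_{n-1}

approx = sp.expand(sum(terms))
print("ADM partial sum:", approx)          # 1 + t + t**2 + ... (geometric)
print("check at t=0.3 :", float(approx.subs(t, 0.3)), "exact:", 1 / 0.7)
```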

  6. Semi-analytic solution to planar Helmholtz equation

    Directory of Open Access Journals (Sweden)

    Tukač M.

    2013-06-01

    The acoustic solution of interior domains is of great interest, and solving acoustic pressure fields faster and with lower computational requirements is in demand. A novel solution technique based on the analytic solution of the Helmholtz equation in a rectangular domain is presented. This semi-analytic solution is compared with the finite element method, which is taken as the reference. Results show that the presented method is as precise as the finite element method. As the semi-analytic method does not require spatial discretization, it can be used for small and very large acoustic problems at the same computational cost.
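
    A minimal sketch of the generic separation-of-variables construction for a rigid-walled rectangle, assuming a unit point source and textbook cosine eigenmodes (this is the standard modal expansion, not necessarily the paper's exact formulation), might look as follows in Python:

```python
import numpy as np

# Modal (separation-of-variables) solution of the 2-D Helmholtz equation
# in a rigid-walled rectangle driven by a unit point source:
#   p(r) = sum_mn phi_mn(r0) * phi_mn(r) / (k_mn^2 - k^2),
# with cosine eigenmodes phi_mn. Generic textbook construction; the room
# size, frequency and source position are invented.
Lx, Ly = 4.0, 3.0               # domain dimensions (m)
k = 2 * np.pi * 100 / 343.0     # wavenumber at 100 Hz, c = 343 m/s

def mode(m, n, x, y):
    norm = np.sqrt((2 if m else 1) * (2 if n else 1) / (Lx * Ly))
    return norm * np.cos(m * np.pi * x / Lx) * np.cos(n * np.pi * y / Ly)

def pressure(x, y, x0=1.0, y0=1.0, mmax=30, nmax=30):
    p = 0.0
    for m in range(mmax + 1):
        for n in range(nmax + 1):
            k_mn2 = (m * np.pi / Lx) ** 2 + (n * np.pi / Ly) ** 2
            p += mode(m, n, x0, y0) * mode(m, n, x, y) / (k_mn2 - k ** 2)
    return p

print(f"p at (2.5, 1.5): {pressure(2.5, 1.5):+.4f} (arbitrary units)")
```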

  7. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, water liquid, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with the results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical considerations are consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975

  8. IBM SPSS modeler essentials effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popular data mining tool. This book will guide you through the data mining process, and presents the relevant statistical methods used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  9. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximate 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs

  10. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
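
    For contrast, the brute-force Monte-Carlo bootstrap that the analytical framework avoids can be sketched in a few lines of Python; the dataset and the statistic (a plain sample mean) are invented for illustration.

```python
import numpy as np

# Monte-Carlo bootstrap baseline that the analytical approach avoids:
# re-estimate a statistic on many resampled datasets and average.
# Here the "model" is just the sample mean of a toy dataset.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=200)

B = 5000
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(B)])
print(f"bootstrap average of the mean : {boot_means.mean():.4f}")
print(f"bootstrap std (uncertainty)   : {boot_means.std(ddof=1):.4f}")
```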

  11. Analysis of thin-walled cylindrical composite shell structures subject to axial and bending loads: Concept development, analytical modeling and experimental verification

    Science.gov (United States)

    Mahadev, Sthanu

    Continued research and development efforts in recent years have generated novel avenues towards the advancement of efficient and effective slender laminated fiber-reinforced composite members. Numerous studies have focused on the modeling and response characterization of composite structures with particular relevance to thin-walled cylindrical composite shells. This class of shell configurations is being actively explored to fully determine its mechanical efficacy for primary aerospace structural members. The proposed research is targeted towards formulating a composite-shell-theory-based prognosis methodology that entails an elaborate analysis and investigation of thin-walled, cylindrical-shell-type laminated composite configurations, which are highly desirable in an increasing number of mechanical and aerospace applications. The prime motivation for adopting this theory arises from its superior ability to generate a simple yet viable closed-form analytical solution procedure for numerous geometrically intense, inherently curved composite structures. This analytical evaluative routine offers first-hand insight into the primary mechanical characteristics that essentially govern the behavior of slender composite shells under typical static loading conditions. The current work demonstrates the robustness of this mathematical framework by showing its potential for the prediction of structural properties such as axial stiffness and bending stiffness. Longitudinal ply-stress computations are investigated upon deriving the global stiffness matrix model for composite cylindrical tubes with circular cross-sections. Additionally, this work employs a finite-element-based numerical technique to substantiate the analytical results reported for cylindrically shaped circular composite tubes. Furthermore, this concept development is extended to the study of thin-walled, open cross-sectioned, curved laminated shells that are geometrically

  12. Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    International Nuclear Information System (INIS)

    Li Jin; Liu Hongxia; Li Bin; Cao Lei; Yuan Bo

    2010-01-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier-lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a 'rollup' in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations. (semiconductor devices)

  13. pyJac: Analytical Jacobian generator for chemical kinetics

    Science.gov (United States)

    Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen

    2017-06-01

    Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, reactive-flow simulations generally rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (number of species ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac.
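
    The core idea, an analytical Jacobian agreeing with (and replacing) finite-difference approximations, can be illustrated on a toy two-species system; the sketch below is hand-written Python for an invented rate law, not pyJac-generated code.

```python
import numpy as np

# Analytical Jacobian vs. a finite-difference approximation for a toy
# 2-species "kinetics" system (not a pyJac output):
#   dy1/dt = -k1*y1*y2,   dy2/dt = k1*y1*y2 - k2*y2
k1, k2 = 3.0, 0.5

def f(y):
    r = k1 * y[0] * y[1]
    return np.array([-r, r - k2 * y[1]])

def jac_analytical(y):
    return np.array([[-k1 * y[1], -k1 * y[0]],
                     [ k1 * y[1],  k1 * y[0] - k2]])

def jac_finite_diff(y, eps=1e-7):
    J = np.empty((2, 2))
    for j in range(2):
        dy = np.zeros(2)
        dy[j] = eps
        J[:, j] = (f(y + dy) - f(y - dy)) / (2 * eps)   # central difference
    return J

y = np.array([0.8, 0.2])
print("max |J_an - J_fd| =",
      np.max(np.abs(jac_analytical(y) - jac_finite_diff(y))))
```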

  14. ANALYTICAL AND SIMULATION PLANNING MODEL OF URBAN PASSENGER TRANSPORT

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    The article describes the structure of the analytical and simulation models used to make informed decisions in the planning of urban passenger transport. A UML diagram that describes the relationships between the classes of the proposed model has been designed. A description is given of the main agents of the model, developed in the AnyLogic simulation environment. A user interface with GIS map integration has been designed. Simulation results are also provided that support conclusions about the model's soundness and the possibility of its use in solving planning problems of urban passenger transport.

  15. Analytic Models for Sunlight Charging of a Rapidly Spinning Satellite

    National Research Council Canada - National Science Library

    Tautz, Maurice

    2003-01-01

    ... photoelectrons can be blocked by local potential barriers. In this report, we discuss two analytic models for sunlight charging of a rapidly spinning spherical satellite, both of which are based on blocked photoelectron currents...

  16. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  17. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    Test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life are given. A very good agreement between experimental and analytical results is obtained when the crack closure model is used in determination of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied.

  18. An Analytical Model for Fatigue Life Prediction Based on Fracture Mechanics and Crack Closure

    DEFF Research Database (Denmark)

    Ibsø, Jan Behrend; Agerskov, Henning

    1996-01-01

    Test specimens are compared with fatigue life predictions using a fracture mechanics approach. In the calculation of the fatigue life, the influence of the welding residual stresses and crack closure on the fatigue crack growth is considered. A description of the crack closure model for analytical determination of the fatigue life is included. Furthermore, the results obtained in studies of the various parameters that have an influence on the fatigue life are given. A very good agreement between experimental and analytical results is obtained when the crack closure model is used in determination of the analytical fatigue lives. Both the analytical and experimental results obtained show that the Miner rule may give quite unconservative predictions of the fatigue life for the types of stochastic loading studied.
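
    A minimal sketch of the underlying fracture mechanics calculation, integrating the Paris crack-growth law with invented constants and without the paper's crack closure and residual-stress corrections, is given below.

```python
import numpy as np

# Fatigue life estimation by integrating the Paris crack-growth law
#   da/dN = C * (dK)^m,  with dK = Y * dsigma * sqrt(pi * a).
# Crack closure and residual-stress effects from the paper are NOT
# included; all values are illustrative placeholders.
C, m = 2.0e-12, 3.0      # Paris constants (SI units, MPa*sqrt(m))
Y = 1.12                 # geometry factor, edge crack, hypothetical
dsigma = 80.0            # stress range (MPa)
a0, ac = 1.0e-3, 2.0e-2  # initial and critical crack lengths (m)

a, N, da = a0, 0.0, 1.0e-5
while a < ac:            # simple stepwise cycle counting
    dK = Y * dsigma * np.sqrt(np.pi * a)
    N += da / (C * dK ** m)
    a += da
print(f"estimated fatigue life: {N:.3e} cycles")
```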

  19. An analytical model for enantioseparation process in capillary electrophoresis

    Science.gov (United States)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

    An analytical model to explain the mobilities of an enantiomer binary mixture in capillary electrophoresis experiments is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector, and the temporary diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of chiral selector concentration, [C], can be measured. The behaviour of Δμ versus [C] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulation.
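
    The Wren-Rowe phenomenological model used for comparison can be sketched directly, since each enantiomer's apparent mobility is a binding-weighted average of free and complexed mobilities; the parameter values below are illustrative, not fitted to the chlorpheniramine/β-CD system.

```python
import numpy as np

# Wren-Rowe model: each enantiomer's apparent mobility is
#   mu_i = (mu_f + mu_c * K_i * C) / (1 + K_i * C),
# so the mobility difference dmu(C) peaks at C = 1/sqrt(K1*K2).
# All parameter values are illustrative placeholders.
mu_f, mu_c = 2.0e-8, 0.5e-8      # free / complexed mobilities (m^2/Vs)
K1, K2 = 900.0, 700.0            # binding constants (1/M), hypothetical

def delta_mu(C):
    return C * (K1 - K2) * (mu_f - mu_c) / ((1 + K1 * C) * (1 + K2 * C))

for C in (1e-4, 1e-3, 1.0 / np.sqrt(K1 * K2), 1e-2, 1e-1):  # selector conc. (M)
    print(f"[C] = {C:8.5f} M -> dmu = {delta_mu(C):.3e} m^2/Vs")
```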

  20. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    Science.gov (United States)

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  1. The Analytic Information Warehouse (AIW): a Platform for Analytics using Electronic Health Record Data

    Science.gov (United States)

    Post, Andrew R.; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H.

    2013-01-01

    Objective: To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. Materials and Methods: We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. Results: We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in five years of data from our institution’s clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. Discussion and Conclusion: A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. PMID:23402960

  2. Analytical thermal modelling of multilayered active embedded chips into high density electronic board

    Directory of Open Access Journals (Sweden)

    Monier-Vinard Eric

    2013-01-01

    The recent Printed Wiring Board embedding technology is an attractive packaging alternative that allows a very high degree of miniaturization by stacking multiple layers of embedded chips. This disruptive technology will further increase the thermal management challenges by concentrating heat dissipation at the heart of the organic substrate structure. In order to allow the electronic designer to analyze early the limits of the power dissipation, depending on the embedded chip location inside the board, as well as the thermal interactions with other buried chips or surface-mounted electronic components, an analytical thermal modelling approach was established. The presented work describes the comparison of the analytical model results with numerical models of various embedded chip configurations. The thermal behaviour predictions of the analytical model, found to be within ±10% relative error, demonstrate its relevance for modelling high density electronic boards. Besides, the approach promotes a practical solution to study the potential gain of conducting a part of the heat flow from the components towards a set of localized cooled board pads.

  3. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    Science.gov (United States)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine-meander-based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of a MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Meijs-Fokkema capacitance model, taking care of the nonlinear electrostatic force, the fringing field effect due to beam thickness, and the etched holes on the beam simultaneously, followed by validation against the simulated results of the benchmark full-3D FEM solver CoventorWare over a wide range of structural parameter variations. It shows good agreement with the simulated results. Secondly, an optimization method is presented to determine the optimum configuration of the switch for achieving minimum pull-in voltage, considering the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with less computational cost and complexity. Upon comparing the applied algorithms with each other, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and higher convergence speed. Optimized values are validated against the simulated results of CoventorWare, which show very satisfactory results with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square holes in a given beam area of an RF MEMS switch for perforation. The algorithm dynamically accommodates all the square holes within a given beam area such that the maximum space is utilized. This automated arrangement of perforation holes will further improve the computational complexity and design accuracy of the complex design of a perforated MEMS switch.
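
    For orientation, the classical parallel-plate pull-in estimate (not the paper's Meijs-Fokkema-based model) can be sketched as follows; the suspension stiffness, gap and areas are invented, and the perforation is represented only crudely by removing hole area from the electrode.

```python
import numpy as np

# Textbook parallel-plate pull-in voltage,
#   V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A_eff)),
# with the perforated-beam effect crudely represented by removing the
# hole area from the electrode area. This is the classical estimate,
# not the paper's model; all dimensions are invented placeholders.
EPS0 = 8.854e-12

def pull_in_voltage(k_spring, gap, area, hole_fraction=0.0):
    a_eff = area * (1.0 - hole_fraction)     # crude perforation correction
    return np.sqrt(8.0 * k_spring * gap ** 3 / (27.0 * EPS0 * a_eff))

k = 2.5                 # serpentine-suspension stiffness (N/m), hypothetical
g0 = 2.0e-6             # initial gap (m)
A = 100e-6 * 80e-6      # beam/electrode overlap area (m^2)
for frac in (0.0, 0.2, 0.4):
    print(f"hole fraction {frac:.1f}: "
          f"V_pi = {pull_in_voltage(k, g0, A, frac):.2f} V")
```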

  4. Two-dimensional analytic weighting functions for limb scattering

    Science.gov (United States)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
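
    The notion of a weighting function can be illustrated on a toy Beer-Lambert forward model, where the analytic derivative of the radiance with respect to each layer amount is available in closed form and can be checked against numerical perturbation; this mimics the idea only and is unrelated to SASKTRAN-HR's implementation.

```python
import numpy as np

# Toy "weighting function": the derivative of a measured radiance with
# respect to the constituent amount in each model layer. For a simple
# Beer-Lambert forward model I = I0 * exp(-sum_i k * x_i * ds_i), the
# analytic derivative is in closed form and can be checked against
# numerical perturbation. All values are invented placeholders.
I0, k = 1.0, 0.3
ds = np.array([1.0, 2.0, 2.0, 1.0])        # path lengths through 4 layers
x = np.array([0.5, 0.8, 0.6, 0.2])         # layer constituent amounts

def radiance(x):
    return I0 * np.exp(-np.sum(k * x * ds))

w_analytic = -k * ds * radiance(x)          # dI/dx_i, closed form

eps = 1e-6
w_numeric = np.array([(radiance(x + eps * np.eye(4)[i]) - radiance(x)) / eps
                      for i in range(4)])   # one perturbed run per layer
print("analytic :", np.round(w_analytic, 6))
print("numeric  :", np.round(w_numeric, 6))
```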

  5. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic models in computer networks describe a complicated system. These systems show non-linear features and their behaviour is also difficult to simulate. Before implementing network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setting up of a non-linear simulation model that helps us to observe the dataflow problems of networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which is defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the modelled network.

  6. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable aid in learning logical thinking but of less assistance when learning problem-solving skills. The paper is the third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  7. Generalized analytic solutions and response characteristics of magnetotelluric fields on anisotropic infinite faults

    Science.gov (United States)

    Bing, Xue; Yicai, Ji

    2018-06-01

    To directly understand and accurately analyze magnetotelluric (MT) data detected on anisotropic infinite faults, two-dimensional partial differential equations of the MT fields are used to establish a model of anisotropic infinite faults via the Fourier transform method. A multi-fault model is developed to extend the one-fault model. The transverse electric (TE) mode and transverse magnetic (TM) mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions of different anisotropic conductivity structures. The influence of the thickness and conductivity of the media on the MT responses is discussed, and the analytic principles are also given. The results are significant for understanding MT responses and for interpreting data from complex anisotropic infinite faults.
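
    For orientation, the simplest MT response against which such fault models are contrasted is the uniform isotropic half-space, whose surface impedance and apparent resistivity follow in closed form. A minimal sketch of that baseline (not the paper's anisotropic fault solution), with an assumed conductivity and period range:

      import numpy as np

      MU0 = 4e-7 * np.pi  # magnetic permeability of free space, H/m

      def halfspace_impedance(sigma, period):
          """Surface impedance Z = sqrt(i*omega*mu0/sigma) of a uniform
          isotropic half-space."""
          omega = 2.0 * np.pi / period
          return np.sqrt(1j * omega * MU0 / sigma)

      def apparent_resistivity(Z, period):
          """Cagniard apparent resistivity rho_a = |Z|^2 / (omega*mu0)."""
          omega = 2.0 * np.pi / period
          return np.abs(Z) ** 2 / (omega * MU0)

      sigma = 0.01                     # assumed conductivity, S/m (100 ohm-m)
      for T in np.logspace(-2, 3, 6):  # periods in seconds
          Z = halfspace_impedance(sigma, T)
          # recovers 1/sigma = 100 ohm-m at every period, as it should
          print(f"T={T:8.2f} s  rho_a={apparent_resistivity(Z, T):8.2f} ohm-m")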

  8. Foam for Enhanced Oil Recovery : Modeling and Analytical Solutions

    NARCIS (Netherlands)

    Ashoori, E.

    2012-01-01

    Foam increases sweep in miscible- and immiscible-gas enhanced oil recovery by decreasing the mobility of gas enormously. This thesis is concerned with the simulations and analytical solutions for foam flow for the purpose of modeling foam EOR in a reservoir. For the ultimate goal of upscaling our

  9. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    Science.gov (United States)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean
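
    The decomposition the abstract mentions — a line source approximated by integrating the point-source Gaussian plume across the source geometry — can be sketched directly. The plume formula below is the standard ground-level Gaussian solution with full ground reflection; the emission rate, wind speed, and dispersion coefficients are assumed example values, not taken from the paper.

      import numpy as np

      def gaussian_plume(q, u, y, sigma_y, sigma_z):
          """Ground-level concentration from a ground-level point source of
          strength q (g/s) in wind u (m/s); receptor at crosswind offset y."""
          return (q / (np.pi * u * sigma_y * sigma_z)
                  * np.exp(-y**2 / (2.0 * sigma_y**2)))

      def line_source_concentration(q_per_m, u, x_r, y_r, half_length, n=200):
          """Crosswind line source approximated as n point sources and
          integrated by the trapezoidal rule -- the kind of 'expensive'
          scheme the analytical solutions replace."""
          ys = np.linspace(-half_length, half_length, n)
          sigma_y = 0.08 * x_r   # assumed rural-ish dispersion coefficients
          sigma_z = 0.06 * x_r
          c = gaussian_plume(q_per_m, u, y_r - ys, sigma_y, sigma_z)
          return np.trapz(c, ys)

      # Receptor 500 m downwind of a 1 km road emitting 0.01 g/s per metre
      print(line_source_concentration(q_per_m=0.01, u=5.0, x_r=500.0,
                                      y_r=0.0, half_length=500.0))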

  10. Analytical model of SiPM time resolution and order statistics with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2015-01-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.
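
    The order-statistics idea can be illustrated with a small Monte Carlo: detection times of N photons are drawn i.i.d., the trigger fires on the k-th earliest (the k-th order statistic), and the spread of that statistic sets the single-detector timing resolution. The decay constant, photon count, and jitter below are assumed illustrative values, and crosstalk — the paper's actual subject — is ignored here.

      import numpy as np

      rng = np.random.default_rng(0)

      def kth_detection_time(n_photons, tau_decay, sptr_sigma, k,
                             n_events=20_000):
          """k-th order statistic of i.i.d. detection times:
          exponential scintillation decay + Gaussian single-photon jitter."""
          t = rng.exponential(tau_decay, size=(n_events, n_photons))
          t += rng.normal(0.0, sptr_sigma, size=t.shape)
          t.sort(axis=1)
          return t[:, k - 1]   # time of the k-th earliest photon

      # Assumed: 40 ns decay, 500 detected photons, 120 ps jitter (times in ns)
      times = kth_detection_time(500, 40.0, 0.12, k=1)
      fwhm = 2.355 * times.std()   # Gaussian-equivalent FWHM estimate
      print(f"single-detector FWHM ~ {fwhm*1e3:.0f} ps; "
            f"CRT ~ {fwhm*np.sqrt(2)*1e3:.0f} ps")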

  11. Analytical model of SiPM time resolution and order statistics with crosstalk

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, S., E-mail: Sergey.Vinogradov@liverpool.ac.uk [University of Liverpool and Cockcroft Institute, Sci-Tech Daresbury, Keckwick Lane, Warrington WA4 4AD (United Kingdom); P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991 Leninskiy Prospekt 53, Moscow (Russian Federation)

    2015-07-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.

  12. An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality

    Science.gov (United States)

    Yin, Qingli

    2013-01-01

    Taking into account the characteristics of college experimental teaching, through investigation and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…
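
    The core AHP computation behind such a model — deriving index weights from a pairwise comparison matrix and checking its consistency — is compact enough to sketch. The comparison matrix below is an assumed example, not the paper's; the random-index values are the standard Saaty constants.

      import numpy as np

      def ahp_weights(pairwise):
          """Principal-eigenvector weights of a pairwise comparison matrix,
          plus Saaty's consistency ratio."""
          A = np.asarray(pairwise, dtype=float)
          n = A.shape[0]
          vals, vecs = np.linalg.eig(A)
          i = np.argmax(vals.real)
          w = np.abs(vecs[:, i].real)
          w /= w.sum()
          ci = (vals[i].real - n) / (n - 1)   # consistency index
          ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
          cr = ci / ri if ri else 0.0         # ratio < 0.1 is acceptable
          return w, cr

      # Assumed 3-index example: preparation vs. process vs. outcomes
      A = [[1,   3,   5],
           [1/3, 1,   2],
           [1/5, 1/2, 1]]
      w, cr = ahp_weights(A)
      print(w, cr)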

  13. Addressing current challenges in cancer immunotherapy with mathematical and computational modelling.

    Science.gov (United States)

    Konstorum, Anna; Vella, Anthony T; Adler, Adam J; Laubenbacher, Reinhard C

    2017-06-01

    The goal of cancer immunotherapy is to boost a patient's immune response to a tumour. Yet, the design of an effective immunotherapy is complicated by various factors, including a potentially immunosuppressive tumour microenvironment, immune-modulating effects of conventional treatments and therapy-related toxicities. These complexities can be incorporated into mathematical and computational models of cancer immunotherapy that can then be used to aid in rational therapy design. In this review, we survey modelling approaches under the umbrella of the major challenges facing immunotherapy development, which encompass tumour classification, optimal treatment scheduling and combination therapy design. Although overlapping, each challenge has presented unique opportunities for modellers to make contributions using analytical and numerical analysis of model outcomes, as well as optimization algorithms. We discuss several examples of models that have grown in complexity as more biological information has become available, showcasing how model development is a dynamic process interlinked with the rapid advances in tumour-immune biology. We conclude the review with recommendations for modellers both with respect to methodology and biological direction that might help keep modellers at the forefront of cancer immunotherapy development. © 2017 The Author(s).
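
    A minimal example of the kind of model such reviews survey is a two-compartment tumour-immune ODE system with a constant effector-supply term that an immunotherapy could boost. The sketch below, with assumed parameter values, is illustrative only and is not drawn from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def tumour_immune(t, y, s, p, g, m, d, a, b, n):
          """Toy effector-cell (E) / tumour (T) dynamics: E grows by
          baseline supply s plus tumour-stimulated recruitment; T grows
          logistically and is killed on contact with E."""
          E, T = y
          dE = s + p * E * T / (g + T) - m * E * T - d * E
          dT = a * T * (1.0 - b * T) - n * E * T
          return [dE, dT]

      # Assumed illustrative parameters
      params = dict(s=0.1, p=0.12, g=20.0, m=0.003, d=0.02,
                    a=0.18, b=1e-4, n=0.01)
      sol = solve_ivp(tumour_immune, (0.0, 200.0), [1.0, 50.0],
                      args=tuple(params.values()))
      print(sol.y[:, -1])   # final effector and tumour burdens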

  14. Analytical and computational approaches to define the Aspergillus niger secretome

    Energy Technology Data Exchange (ETDEWEB)

    Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  15. An analytical model of flagellate hydrodynamics

    International Nuclear Information System (INIS)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders

    2017-01-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left–right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory. We find that the longitudinal flagellum is responsible for the average translational motion whereas the transversal flagellum governs the rotational motion. Finally, we show that the transversal flagellum can lead to strong feeding currents to localized capture sites on the cell surface. (paper)
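
    The superposition step can be illustrated with the free-space Stokeslet, the point-force solution that the Oseen result corrects for the presence of the no-slip sphere. The sketch below superposes the flows of two flagellar point forces but omits the sphere's image system, which the paper's model includes; forces and positions are assumed example values.

      import numpy as np

      def stokeslet(r, F, mu=1e-3):
          """Free-space Stokeslet: velocity at displacement r from point force F,
          u = (1/(8*pi*mu)) * (F/|r| + (F.r) r / |r|^3).
          Omits the image system enforcing no-slip on the cell sphere."""
          rn = np.linalg.norm(r)
          return (F / rn + np.dot(F, r) * r / rn**3) / (8.0 * np.pi * mu)

      # Assumed longitudinal and transversal flagellar forces (N), positions (m)
      forces = [
          (np.array([0.0, 0.0, -1e-12]), np.array([0.0, 0.0, 8e-6])),
          (np.array([1e-12, 0.0, 0.0]),  np.array([6e-6, 0.0, 0.0])),
      ]

      # Superpose the two point-force flows at a sample field point
      x = np.array([10e-6, 5e-6, 5e-6])
      u = sum(stokeslet(x - pos, F) for F, pos in forces)
      print(u)   # flow velocity (m/s) at x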

  16. Ada & the Analytical Engine.

    Science.gov (United States)

    Freeman, Elisabeth

    1996-01-01

    Presents a brief history of Ada Byron King, Countess of Lovelace, focusing on her primary role in the development of the Analytical Engine--the world's first computer. Describes the Ada Project (TAP), a centralized World Wide Web site that serves as a clearinghouse for information related to women in computing, and provides a Web address for…

  17. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
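
    The throughput figures in the abstract are easy to sanity-check; the short sketch below converts the quoted 4.2 MB/s group-analysis rate into a monthly volume and confirms it lands near the quoted 11 TB per month.

      MB = 1e6
      TB = 1e12

      rate = 4.2 * MB                    # group analysis output, bytes/s
      seconds_per_month = 30 * 24 * 3600

      volume = rate * seconds_per_month  # bytes/month
      print(f"{volume / TB:.1f} TB/month")   # ~10.9, i.e. the quoted ~11 TB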

  18. Artist - analytical RT inspection simulation tool

    International Nuclear Information System (INIS)

    Bellon, C.; Jaenisch, G.R.

    2007-01-01

    The computer simulation of radiography is applicable to different purposes in NDT, such as the qualification of NDT systems, the prediction of their reliability, the optimization of system parameters, feasibility analysis, model-based data interpretation, education and training of NDT/NDE personnel, and others. Within the framework of the integrated project FilmFree, the radiographic testing (RT) simulation software developed by BAM is being further developed to meet practical requirements for inspection planning in digital industrial radiology. It combines analytical modelling of the RT inspection process with a CAD-orientated object description applicable to various industrial sectors such as power generation, railways and others. (authors)

  19. Development of Soft Computing and Applications in Agricultural and Biological Engineering

    Science.gov (United States)

    Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...

  20. Numerical and analytical solutions for problems relevant for quantum computers; Numerische und analytische Loesungen fuer Quanteninformatisch-relevante Probleme

    Energy Technology Data Exchange (ETDEWEB)

    Spoerl, Andreas

    2008-06-05

    Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high-quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
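
    As a small, concrete instance of the operations mentioned, the quantum Fourier transform on n qubits is the unitary with matrix entries omega^(j*k)/sqrt(N), N = 2^n. The sketch below builds it directly in numpy and checks unitarity; it illustrates the target operation only, not the paper's time-optimal pulse sequences.

      import numpy as np

      def qft_matrix(n_qubits):
          """Quantum Fourier transform on n qubits:
          U[j, k] = omega^(j*k) / sqrt(N), omega = exp(2*pi*i/N)."""
          N = 2 ** n_qubits
          j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
          omega = np.exp(2j * np.pi / N)
          return omega ** (j * k) / np.sqrt(N)

      U = qft_matrix(3)
      # Unitarity check: U^dagger U = I
      print(np.allclose(U.conj().T @ U, np.eye(8)))   # True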