WorldWideScience

Sample records for methods concentrate computational

  1. Sediment acoustic index method for computing continuous suspended-sediment concentrations

    Science.gov (United States)

    Landers, Mark N.; Straub, Timothy D.; Wood, Molly S.; Domanski, Marian M.

    2016-07-11

    Suspended-sediment characteristics can be computed using acoustic indices derived from acoustic Doppler velocity meter (ADVM) backscatter data. The sediment acoustic index method applied in these types of studies can be used to more accurately and cost-effectively provide time-series estimates of suspended-sediment concentration and load, which is essential for informed solutions to many sediment-related environmental, engineering, and agricultural concerns. Advantages of this approach over other sediment surrogate methods include: (1) better representation of cross-sectional conditions from large measurement volumes, compared to other surrogate instruments that measure data at a single point; (2) high temporal resolution of collected data; (3) data integrity when biofouling is present; and (4) less rating curve hysteresis compared to streamflow as a surrogate. An additional advantage of this technique is the potential expansion of monitoring suspended-sediment concentrations at sites with existing ADVMs used in streamflow velocity monitoring. This report provides much-needed standard techniques for sediment acoustic index methods to help ensure accurate and comparable documented results.
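
    A minimal sketch of the general idea behind an acoustic index rating (not the USGS standard procedure documented in the report): log10 of sampled suspended-sediment concentration is regressed on a sediment-corrected backscatter index from the ADVM, and the rating is then applied to the continuous backscatter record. All data, coefficients, and the omitted retransformation bias correction are illustrative assumptions.

```python
import numpy as np

# Hypothetical calibration pairs: sediment-corrected backscatter (SCB, dB)
# from an ADVM and concurrently sampled suspended-sediment concentration (mg/L).
scb_cal = np.array([55.0, 60.0, 65.0, 70.0, 75.0])
ssc_cal = np.array([20.0, 45.0, 110.0, 260.0, 600.0])

# Fit the acoustic index rating: log10(SSC) = a + b * SCB
b, a = np.polyfit(scb_cal, np.log10(ssc_cal), 1)

def ssc_from_backscatter(scb):
    """Apply the rating to continuous backscatter to estimate SSC (mg/L)."""
    return 10.0 ** (a + b * np.asarray(scb))

# Continuous 15-minute ADVM index record (illustrative values).
scb_series = np.array([58.2, 63.7, 69.1, 72.4])
print(ssc_from_backscatter(scb_series))
```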

  2. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentrations in groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely the multi-layer perceptron (MLP) and radial basis function (RBF) networks, for forecasting heavy metal concentrations has been investigated. In addition, Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using MATLAB R2014. The MLP performs better than the other models for heavy metal concentration estimation. The simulation results revealed that the MLP model was able to model heavy metal concentrations in groundwater resources favorably, and it can be utilized effectively in environmental applications and in water quality estimation. In addition, of the three training algorithms, Levenberg-Marquardt performed best. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metal concentrations in groundwater resources of Asadabad Plain. Based on data collected from the plain, MLP and RBF models were developed for each heavy metal. The MLP can be utilized effectively for prediction of heavy metal concentrations in groundwater resources of Asadabad Plain.
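
    A minimal sketch of the MLP part of such a workflow, assuming scikit-learn rather than the MATLAB toolbox used in the study; the predictors, data and network size are placeholders, and scikit-learn does not provide the Levenberg-Marquardt trainer compared in the paper (an L-BFGS solver is used here instead).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder predictors (e.g. EC, pH, well depth) and a synthetic target
# standing in for a heavy metal concentration.
X = rng.normal(size=(120, 3))
y = 5.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.3, size=120)

# Small MLP trained with a quasi-Newton (L-BFGS) solver.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                 max_iter=5000, random_state=0),
)
model.fit(X[:100], y[:100])
print("held-out R^2:", model.score(X[100:], y[100:]))
```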

  3. Method for obtaining silver nanoparticle concentrations within a porous medium via synchrotron X-ray computed microtomography.

    Science.gov (United States)

    Molnar, Ian L; Willson, Clinton S; O'Carroll, Denis M; Rivers, Mark L; Gerhard, Jason I

    2014-01-21

    Attempts at understanding nanoparticle fate and transport in the subsurface environment are currently hindered by an inability to quantify nanoparticle behavior at the pore scale (within and between pores) within realistic pore networks. This paper is the first to present a method for high resolution quantification of silver nanoparticle (nAg) concentrations within porous media under controlled experimental conditions. This method makes it possible to extract silver nanoparticle concentrations within individual pores in static and quasi-dynamic (i.e., transport) systems. Quantification is achieved by employing absorption-edge synchrotron X-ray computed microtomography (SXCMT) and an extension of the Beer-Lambert law. Three-dimensional maps of X-ray mass linear attenuation are converted to SXCMT-determined nAg concentration and are found to closely match the concentrations determined by ICP analysis. In addition, factors affecting the quality of the SXCMT-determined results are investigated: 1) The acquisition of an additional above-edge data set reduced the standard deviation of SXCMT-determined concentrations; 2) X-ray refraction at the grain/water interface artificially depresses the SXCMT-determined concentrations within 18.1 μm of a grain surface; 3) By treating the approximately 20 × 10⁶ voxels within each data set statistically (i.e., averaging), a high level of confidence in the SXCMT-determined mean concentrations can be obtained. This novel method provides the means to examine a wide range of properties related to nanoparticle transport in controlled laboratory porous medium experiments.
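
    A minimal sketch of the absorption-edge idea underlying the method: by the Beer-Lambert law, the difference in a voxel's reconstructed linear attenuation just above and just below the silver K-edge is proportional to the silver mass concentration, with the proportionality set by the jump in silver's mass attenuation coefficient. The numerical values below are illustrative placeholders, not the calibrated values from the paper.

```python
import numpy as np

# Reconstructed linear attenuation (1/cm) of the same voxels imaged just
# below and just above the Ag K-edge (~25.5 keV); illustrative values.
mu_below = np.array([0.410, 0.415, 0.408])
mu_above = np.array([0.435, 0.452, 0.421])

# Jump in silver's mass attenuation coefficient across the K-edge (cm^2/g);
# placeholder value for illustration only.
delta_mu_over_rho_ag = 30.0

# Beer-Lambert-based estimate: water and grains attenuate (almost) equally at
# both energies, so the attenuation difference is attributed to silver alone.
c_ag_g_per_cm3 = (mu_above - mu_below) / delta_mu_over_rho_ag
print("nAg concentration (mg/mL):", 1000.0 * c_ag_g_per_cm3)
```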

  4. Experimental and computation method for determination of burnup and isotopic composition of the WWER-440 fuel using the 134Cs and 137Cs concentrations

    International Nuclear Information System (INIS)

    Babichev, B.A.; Kozharin, V.V.

    1990-01-01

    An experimental and computational method for determination of burnup and actinoid concentrations in WWER fuel elements using the 134Cs and 137Cs concentrations in fuel is considered. It is shown that the error in the calculated fuel burnup and U and Pu isotope concentrations in WWER-440 fuel elements is 1.3-4.9%, provided that the errors in the 134Cs and 137Cs concentration measurements do not exceed 1.7% and 1.2%, respectively. 9 refs.; 10 figs.; 4 tabs
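
    The basis of such methods is that the 137Cs activity tracks the number of fissions (and hence burnup), while the 134Cs/137Cs activity ratio itself grows with burnup because 134Cs is produced by neutron capture on the fission product 133Cs. A minimal sketch, not the procedure of the paper: measured activities are decay-corrected to the end of irradiation and an assumed linear calibration is applied. The half-lives are standard values; the calibration constant is a placeholder.

```python
import numpy as np

T12_CS134 = 2.065   # years, approximate 134Cs half-life
T12_CS137 = 30.1    # years, approximate 137Cs half-life

def decay_correct(activity, cooling_years, half_life_years):
    """Correct a measured activity back to the end of irradiation."""
    lam = np.log(2.0) / half_life_years
    return activity * np.exp(lam * cooling_years)

# Measured activities (arbitrary units) after 3 years of cooling.
a134_meas, a137_meas, cooling = 0.8, 10.0, 3.0
ratio_eoi = (decay_correct(a134_meas, cooling, T12_CS134) /
             decay_correct(a137_meas, cooling, T12_CS137))

# Assumed linear calibration ratio -> burnup; placeholder slope, not from the paper.
K_CAL = 150.0
print("estimated burnup (MW*d/kgU):", K_CAL * ratio_eoi)
```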

  5. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  6. Concentration - dose - risk computer code

    International Nuclear Information System (INIS)

    Frujinoiu, C.; Preda, M.

    1997-01-01

    Society is generally less willing to promote remedial actions in low-level chronic exposure situations. Radon in dwellings and workplaces is one such case of chronic exposure. Apart from radon, the sole source for which the international community has agreed on action levels, there are numerous other sources technologically modified by man that can generate chronic exposure. Even if nuclear installations are the most prominent, we are surrounded by 'man-made radioactivity' from, for example, the mining industry, coal-fired power plants and the fertilizer industry. Operating an installation even within 'normal limits' can generate chronic exposure owing to the accumulation of pollutants over time. This asymptotic approach to a constant level defines a steady-state concentration that characterizes the source's presence in the environment. The paper presents a methodology and a code package that sequentially derive the steady-state concentrations, doses, detriments, and the costs of the effects of installation operation in a given environment. (authors)
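
    The asymptotic approach to a steady-state concentration can be illustrated with a single-compartment model: for a constant release rate Q into a mixing volume V with first-order removal rate λ (decay plus environmental losses), C(t) = Q/(λV)·(1 − exp(−λt)), which tends to Q/(λV). A minimal sketch, not taken from the paper, with all parameter values illustrative:

```python
import numpy as np

Q = 1.0e6      # constant release rate (Bq/year), illustrative
V = 1.0e9      # mixing volume (m^3), illustrative
lam = 0.05     # total removal rate (1/year): decay + losses, illustrative

t = np.linspace(0.0, 100.0, 6)                 # years of operation
c = Q / (lam * V) * (1.0 - np.exp(-lam * t))   # concentration, Bq/m^3
print("C(t):", c)
print("steady state:", Q / (lam * V))
```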

  7. Computational algorithm for molybdenite concentrate annealing

    International Nuclear Information System (INIS)

    Alkatseva, V.M.

    1995-01-01

    A computational algorithm is presented for the annealing of molybdenite concentrate with granulated return dust and for the annealing of granulated molybdenite concentrate. The algorithm differs from known analogues for sulphide raw material annealing by including the calculation of the return dust mass in stationary annealing; this quantity varies from the return dust mass value obtained in the first iteration step. Masses of solid products are determined by the distribution of concentrate annealing products, including return dust and bentonite. The algorithm is applied to computations for annealing of other sulphide materials. 3 refs

  8. Comparing computing formulas for estimating concentration ratios

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Simpson, J.C.

    1984-03-01

    This paper provides guidance on the choice of computing formulas (estimators) for estimating concentration ratios and other ratio-type measures of radionuclides and other environmental contaminant transfers between ecosystem components. Mathematical expressions for the expected value of three commonly used estimators (arithmetic mean of ratios, geometric mean of ratios, and the ratio of means) are obtained when the multivariate lognormal distribution is assumed. These expressions are used to explain why these estimators will not in general give the same estimate of the average concentration ratio. They illustrate that the magnitude of the discrepancies depends on the magnitude of measurement biases, and on the variances and correlations associated with spatial heterogeneity and measurement errors. This paper also reports on a computer simulation study that compares the accuracy of eight computing formulas for estimating a ratio relationship that is constant over time and/or space. Statistical models appropriate for both controlled spiking experiments and observational field studies for either normal or lognormal distributions are considered. 24 references, 15 figures, 7 tables
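
    A minimal sketch of three of the compared estimators on simulated correlated lognormal data, illustrating why they generally give different estimates of the average concentration ratio; the simulation parameters are arbitrary and not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated lognormal concentrations in two ecosystem compartments.
n = 500
z = rng.multivariate_normal([0.0, 1.0], [[0.5, 0.3], [0.3, 0.5]], size=n)
prey, predator = np.exp(z[:, 0]), np.exp(z[:, 1])

ratios = predator / prey
arithmetic_mean_of_ratios = ratios.mean()
geometric_mean_of_ratios = np.exp(np.log(ratios).mean())
ratio_of_means = predator.mean() / prey.mean()

# The three estimators do not coincide for lognormal data.
print(arithmetic_mean_of_ratios, geometric_mean_of_ratios, ratio_of_means)
```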

  9. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
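
    A minimal sketch of the formulation, using SciPy's differential evolution in place of the authors' own implementations: mixed strategies are encoded as box-bounded variables and normalized onto the probability simplex, and the objective is a standard nonnegative regret function whose value is zero exactly at a Nash equilibrium. The game (matching pennies) is an illustrative example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Matching pennies: unique mixed Nash equilibrium at x = y = (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # column player's payoffs

def simplex(u):
    u = np.maximum(u, 1e-12)
    return u / u.sum()

def regret(v):
    """Nonnegative function whose zeros are the Nash equilibria."""
    x, y = simplex(v[:2]), simplex(v[2:])
    px, py = x @ A @ y, x @ B @ y           # current expected payoffs
    rx = np.maximum(A @ y - px, 0.0)        # row player's pure-strategy regrets
    ry = np.maximum(x @ B - py, 0.0)        # column player's pure-strategy regrets
    return np.sum(rx ** 2) + np.sum(ry ** 2)

result = differential_evolution(regret, bounds=[(0.0, 1.0)] * 4, seed=0, tol=1e-10)
print("x =", simplex(result.x[:2]), "y =", simplex(result.x[2:]), "f =", result.fun)
```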

  10. Method for computed tomography

    International Nuclear Information System (INIS)

    Wagner, W.

    1980-01-01

    In transverse computed tomography apparatus in which the positioning zone, in which the patient can be positioned, is larger than the scanning zone, in which a body slice can be scanned, reconstruction errors are liable to occur. These errors are caused by incomplete irradiation of the body during examination. They become manifest not only as an incorrect image of the area not irradiated, but also as an adverse effect on the image of the other, completely irradiated areas. The invention enables these errors to be reduced.

  11. Computational methods working group

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1997-09-01

    During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

  12. MLSOIL and DFSOIL - computer codes to estimate effective ground surface concentrations for dose computations

    International Nuclear Information System (INIS)

    Sjoreen, A.L.; Kocher, D.C.; Killough, G.G.; Miller, C.W.

    1984-11-01

    This report is a user's manual for MLSOIL (Multiple Layer SOIL model) and DFSOIL (Dose Factors for MLSOIL) and a documentation of the computational methods used in those two computer codes. MLSOIL calculates an effective ground surface concentration to be used in computations of external doses. This effective ground surface concentration is equal to (the computed dose in air from the concentration in the soil layers)/(the dose factor for computing dose in air from a plane). MLSOIL implements a five compartment linear-transfer model to calculate the concentrations of radionuclides in the soil following deposition on the ground surface from the atmosphere. The model considers leaching through the soil as well as radioactive decay and buildup. The element-specific transfer coefficients used in this model are a function of the Kd and environmental parameters. DFSOIL calculates the dose in air per unit concentration at 1 m above the ground from each of the five soil layers used in MLSOIL and the dose per unit concentration from an infinite plane source. MLSOIL and DFSOIL have been written to be part of the Computerized Radiological Risk Investigation System (CRRIS) which is designed for assessments of the health effects of airborne releases of radionuclides. 31 references, 3 figures, 4 tables
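
    A minimal sketch of a five-compartment linear-transfer soil model of the kind described: atmospheric deposition feeds the top layer, leaching transfers activity downward from layer to layer, and radioactive decay acts on every layer. The leaching rates and half-life below are placeholders, not the element-specific Kd-based coefficients used by MLSOIL.

```python
import numpy as np
from scipy.integrate import solve_ivp

n_layers = 5
k_leach = np.array([0.5, 0.3, 0.2, 0.1, 0.05])  # 1/year, placeholder leaching rates
lam = np.log(2.0) / 30.0                        # 1/year, e.g. a ~30-year half-life
deposition = 1.0e3                              # Bq/m^2/year onto the surface layer

def dcdt(t, c):
    out = -(k_leach + lam) * c          # losses: leaching downward + decay
    out[1:] += k_leach[:-1] * c[:-1]    # gains from the layer above
    out[0] += deposition                # atmospheric deposition into layer 1
    return out

sol = solve_ivp(dcdt, (0.0, 50.0), np.zeros(n_layers), t_eval=[1.0, 10.0, 50.0])
print(sol.y)   # layer inventories (Bq/m^2) at the requested times
```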

  13. Computational Methods in Medicine

    Directory of Open Access Journals (Sweden)

    Angel Garrido

    2010-01-01

    Artificial Intelligence requires Logic. But its classical version shows too many insufficiencies. So, it is absolutely necessary to introduce more sophisticated tools, such as Fuzzy Logic, Modal Logic, Non-Monotonic Logic, and so on [2]. Among the things that AI needs to represent are Categories, Objects, Properties, Relations between objects, Situations, States, Time, Events, Causes and effects, Knowledge about knowledge, and so on. The problems in AI can be classified in two general types [3, 4]: Search Problems and Representation Problems. There exist different ways to reach this objective. So, we have [3] Logics, Rules, Frames, Associative Nets, Scripts and so on, that are often interconnected. Also, it will be very useful, in dealing with problems of uncertainty and causality, to introduce Bayesian Networks and particularly, as a principal tool, the Essential Graph. We attempt here to show the scope of application of such versatile methods, currently fundamental in Medicine.

  14. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society for Industrial and Applied Mathematics.

  15. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers on the importance of data analysis in enzymology and on the modern methods that have developed concomitantly with computer hardware. It also aims to help researchers validate their computer programs with real and synthetic data to ascertain that the results produced are what they expect. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  16. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  17. Method of enriching oil shale concentrate

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, M

    1942-02-14

    The method is characterized by first producing a concentrate; this concentrate, in water suspension and using suitable apparatus, is then separated by settling into a heavier bottom portion, rich in mineral matter, and a lighter slime containing the concentrated organic substance.

  18. Alignment method for parabolic trough solar concentrators

    Science.gov (United States)

    Diver, Richard B [Albuquerque, NM

    2010-02-23

    A Theoretical Overlay Photographic (TOP) alignment method uses the overlay of a theoretical projected image of a perfectly aligned concentrator on a photographic image of the concentrator to align the mirror facets of a parabolic trough solar concentrator. The alignment method is practical and straightforward, and inherently aligns the mirror facets to the receiver. When integrated with clinometer measurements for which gravity and mechanical drag effects have been accounted for and which are made in a manner and location consistent with the alignment method, all of the mirrors on a common drive can be aligned and optimized for any concentrator orientation.

  19. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  20. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.

  1. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  2. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  3. New method of design of nonimaging concentrators.

    Science.gov (United States)

    Miñano, J C; González, J C

    1992-06-01

    A new method of designing nonimaging concentrators is presented and two new types of concentrators are developed. The first is an aspheric lens, and the second is a lens-mirror combination. A ray tracing of three-dimensional concentrators (with rotational symmetry) is also performed, showing that the lens-mirror combination has a total transmission as high as that of the full compound parabolic concentrator, while its depth is much smaller than that of classical parabolic mirror-nonimaging concentrator combinations. Another important feature of this concentrator is that the optically active surfaces are not in contact with the receiver, as occurs in other nonimaging concentrators in which the rim of the mirror coincides with the rim of the receiver.

  4. A method of uranium isotopes concentration analysis

    International Nuclear Information System (INIS)

    Lin Yuangen; Jiang Meng; Wu Changli; Duan Zhanyuan; Guo Chunying

    2010-01-01

    A basic method for uranium isotope concentration analysis is described in this paper. An iterative method is used to calculate the relative efficiency curve by analyzing the characteristic γ-ray spectra of 235U, 232U and the daughter nuclides of 238U; the relative activities can then be calculated and, finally, the uranium isotope concentrations can be worked out. The result is validated by experiment. (authors)

  5. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users; it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  6. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  7. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings

  8. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  9. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  10. Method of concentrating oil shale by flotation

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, M

    1941-01-28

    A method is described of concentrating oil shale by flotation. It is characterized by grinding the shale to a grain size which, roughly speaking, is less than 0.06 mm and more conveniently should be less than 0.05 mm, followed by flotation. During the process the brown foam formed is separated as concentrate, while the black-brown to all-black foam is separated as a middle product, ground fine again, and thereafter floated once more. The patent contains five additional claims.

  11. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following is found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of the outer iterations compared with free iteration. (author)
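
    A minimal sketch of the kind of exponential table look-up mentioned above: the quantity 1 - exp(-x), needed for every segment in a characteristics sweep, is precomputed on a grid and evaluated by interpolation. The grid size and range are arbitrary choices, not those of the study.

```python
import numpy as np

# Precompute a table of f(x) = 1 - exp(-x) on [0, X_MAX] once.
X_MAX, N = 40.0, 4096
_x = np.linspace(0.0, X_MAX, N)
_f = 1.0 - np.exp(-_x)

def one_minus_exp_table(x):
    """Table look-up with linear interpolation (vectorized over segments)."""
    return np.interp(np.clip(x, 0.0, X_MAX), _x, _f)

tau = np.array([0.01, 0.3, 2.5, 15.0])   # optical thicknesses of segments
print(one_minus_exp_table(tau))
print(1.0 - np.exp(-tau))                # reference values for comparison
```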

  12. Portable method of measuring gaseous acetone concentrations.

    Science.gov (United States)

    Worrall, Adam D; Bernstein, Jonathan A; Angelopoulos, Anastasios P

    2013-08-15

    Measurement of acetone in human breath samples has been previously shown to provide significant non-invasive diagnostic insight into the control of a patient's diabetic condition. In patients with diabetes mellitus, the body produces excess amounts of ketones such as acetone, which are then exhaled during respiration. Various breath analysis methods have allowed the accurate determination of acetone concentrations in exhaled breath. However, many of these methods require instrumentation and pre-concentration steps not suitable for point-of-care use. We have found that by immobilizing resorcinol reagent in a perfluorosulfonic acid polymer membrane, a controlled organic synthesis reaction occurs with acetone in a dry carrier gas. The immobilized, highly selective product of this reaction (a flavan) produces a visible-spectrum color change from which acetone concentrations below 1 ppm can be measured. We demonstrate here how this approach can be used to produce a portable optical sensing device for real-time, non-invasive acetone analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  14. Elementary computer physics, a concentrated one-week course

    DEFF Research Database (Denmark)

    Christiansen, Gunnar Dan

    1978-01-01

    A concentrated one-week course (8 hours per day in 5 days) in elementary computer physics for students in their freshman university year is described. The aim of the course is to remove the constraints on traditional physics courses imposed by the necessity of only dealing with problems that have … fields, and a lunar space vehicle are used as examples….

  15. Quantification of salt concentrations in cured pork by computed tomography

    DEFF Research Database (Denmark)

    Vestergaard, Christian Sylvest; Risum, Jørgen; Adler-Nissen, Jens

    2004-01-01

    Eight pork loin samples were mounted in Plexiglas cylinders and cured for five days. Samples were scanned by computed tomography (CT) once every 24 h. At the end of the experiment, the cylinders were cut into 1-cm sections and analyzed for chloride. From image analysis of the CT images, concentration...

  16. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theories, and on calculational methods previously thought impractical, continues actively, finding new developments thanks to the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described to illustrate recent trends in reactor nuclear design: 1) Advancement of the computational method, 2) Reactor core design and management of the light water reactor, and 3) Nuclear design of the fast reactor. (G.K.)

  17. Effective beam method for element concentrations

    International Nuclear Information System (INIS)

    Tolhurst, Thomas; Barbi, Mauricio; Tokaryk, Tim

    2015-01-01

    A method to evaluate chemical element concentrations in samples by generating an effective polychromatic beam, using real monochromatic beam data as initial input, is presented. There is a great diversity of research being conducted at synchrotron facilities around the world and a diverse set of beamlines to accommodate this research. Time is a precious commodity at synchrotron facilities; therefore, methods that can maximize the time spent collecting data are of value. At the same time the incident radiation spectrum, necessary for some research, may not be known on a given beamline. A preliminary presentation is given here of a method, applicable to X-ray fluorescence spectroscopic analyses, that overcomes the lack of information about the incident beam spectrum and addresses both of these concerns. The method is equally applicable to other X-ray sources so long as local conditions are considered. It relies on replacing the polychromatic spectrum in a standard fundamental parameters analysis with a set of effective monochromatic photon beams. A beam is associated with each element and can be described by an analytical function, allowing extension to elements not included in the necessary calibration measurement(s).

  18. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described

  19. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    “Recent Results on Glucose–Insulin Predictions by Means of a State Observer for Time-Delay Systems” by Pasquale Palumbo et al. introduces a prediction model which in real time predicts the insulin concentration in blood which in turn is used in a control system. The method is tested in simulation … EEG signals to predict upcoming hypoglycemic situations in real time by employing artificial neural networks. The results of a 30-day long clinical study with the implanted device and the developed algorithm are presented. The chapter “Meta-Learning Based Blood Glucose Predictor for Diabetic …, but the insulin amount is chosen using factors that account for this expectation. The increasing availability of more accurate continuous blood glucose measurement (CGM) systems is attracting much interest to the possibilities of explicit prediction of future BG values. Against this background, in 2014 a two...

  20. Prediction Methods for Blood Glucose Concentration

    DEFF Research Database (Denmark)

    -day workshop on the design, use and evaluation of prediction methods for blood glucose concentration was held at the Johannes Kepler University Linz, Austria. One intention of the workshop was to bring together experts working in various fields on the same topic, in order to shed light from different angles … discussions which allowed to receive direct feedback from the point of view of different disciplines. This book is based on the contributions of that workshop and is intended to convey an overview of the different aspects involved in the prediction. The individual chapters are based on the presentations given … in the process of writing this book: All authors for their individual contributions, all reviewers of the book chapters, Daniela Hummer for the entire organization of the workshop, Boris Tasevski for helping with the typesetting, Florian Reiterer for his help editing the book, as well as Oliver Jackson and Karin

  1. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  2. Novel methods in computational finance

    CERN Document Server

    Günther, Michael; Maten, E

    2017-01-01

    This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techni...

  3. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    The article describes the basic statistical methods used in conducting genetic analysis of human traits: segregation analysis, linkage analysis and allelic association studies. Software supporting the implementation of these methods has been developed.

  4. Computational methods in drug discovery

    OpenAIRE

    Sumudu P. Leelananda; Steffen Lindert

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery project...

  5. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  6. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  7. Computational Methods for Biomolecular Electrostatics

    Science.gov (United States)

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  8. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  9. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  10. Method of concentrating radioactive liquid waste

    International Nuclear Information System (INIS)

    Yasumura, Keijiro

    1990-01-01

    Radioactive liquid wastes generated in nuclear power facilities are made to flow into a vessel incorporating first hydrophobic porous membranes. The radioactive liquid wastes are then passed through the first hydrophobic porous membranes under elevated or reduced pressure to remove fine particles contained in the liquid wastes. The liquid wastes that have passed through the first membranes are stored temporarily in a vessel, and the steam generated on heating is passed through second hydrophobic porous membranes and then cooled and concentrated as condensate. The first and second hydrophobic porous membranes pass steam but not water and are made, for example, of thin tetrafluoroethylene-resin membranes. Accordingly, since fine particles are removed by the first hydrophobic porous membranes, a reduction in the concentration rate due to deposition of solids on the membranes during concentration is prevented. (I.S.)

  11. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  12. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

    The authors introduce methods for computed angiotomography of the brain. Setting of the scan planes and levels and the minimum dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with widely available CT scanners. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in screening for cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  13. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  14. Computed exercise plasma lactate concentrations: A conversion formula

    Directory of Open Access Journals (Sweden)

    Lia Bally

    2016-04-01

    Objectives: Blood lactate measurements are commonly used as a marker of skeletal muscle metabolism in sports medicine. Due to the close equilibrium between the extracellular and intramyocellular space, plasma lactate is a more accurate estimate of muscle lactate. However, whole blood-based lactate measurements are more convenient in field use. The purpose of this investigation was therefore (1) to establish a plasma-converting lactate formula for field use, and (2) to validate the computed plasma lactate levels by comparison to a laboratory standard method. Design and methods: A total of 91 venous samples were taken from 6 individuals with type 1 diabetes during resting and exercise conditions and assessed for whole blood and plasma lactate using the YSI 2300 analyzer. A linear model was applied to establish a formula for converting whole blood lactate to plasma lactate. The validity of computed plasma lactate values was assessed by comparison to a laboratory standard method. Results: Whole blood YSI lactate could be converted to plasma YSI values (slope 1.66, intercept 0.12) for samples with normal hematocrit. Computed plasma levels compared to values determined by the laboratory standard method using Passing-Bablok regression yielded a slope of 1.03 (95% CI: 0.99 to 1.08) with an intercept of −0.11 (95% CI: −0.18 to −0.06). Conclusions: Whole blood YSI lactate values can be reliably converted into plasma values which are in line with laboratory-determined plasma measurements. Keywords: Lactate, Plasma vs whole-blood, Hematocrit, Enzyme electrodes, Sample treatment
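
    The reported conversion can be applied directly in the field; a minimal sketch, assuming the stated slope and intercept are to be read as plasma = slope × whole blood + intercept (mmol/L) for samples with normal hematocrit:

```python
def plasma_lactate(whole_blood_mmol_l, slope=1.66, intercept=0.12):
    """Convert a whole-blood YSI lactate reading to an estimated plasma value (mmol/L).

    Assumes the published linear rating plasma = slope * whole_blood + intercept,
    valid for samples with normal hematocrit.
    """
    return slope * whole_blood_mmol_l + intercept

print(plasma_lactate(2.0))   # e.g. a field whole-blood reading of 2.0 mmol/L
```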

  15. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  16. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  17. COMPUTER MODELING OF STRUCTURAL - CONCENTRATION CHARACTERISTICS OF BUILDING COMPOSITE MATERIALS

    Directory of Open Access Journals (Sweden)

    I. I. Zaripova

    2015-09-01

    The article presents computer modeling of the structural and concentration characteristics of a building composite material on the basis of packing theory. The main provisions of the algorithm are described, on the basis of which it was possible to obtain packings with a significant number of packed elements, making the model more representative than existing modeling analogues. The modeled region and its related areas are described; their presence determines the possibility of a percolation process, which in turn makes it possible to study and control individual properties of the building composite material. Concrete is considered as an example of a building composite material, which does not exclude the possibility of using the algorithms and modeling results of similar studies for matrix-type composites (a matrix of one material with particles of another substance distributed through its volume in a certain way). Based on the modeling results, parts and construction elements for various purposes can be manufactured with improved technical characteristics (by controlling the concentration composition).

  18. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  19. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  20. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  1. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
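
    The two ratings can be illustrated with a small, self-contained sketch (Python/NumPy). The calibration values, rating forms (linear index velocity rating, quadratic stage-area rating) and variable names below are hypothetical illustrations, not data or procedures from the report; the operational USGS workflow includes additional diagnostics and quality checks.

        import numpy as np

        # Hypothetical calibration data (not from the report): ADVM index velocity and
        # stage recorded during discharge measurements, plus measured mean velocity and area.
        v_index = np.array([0.20, 0.45, 0.70, 1.10, 1.50])    # m/s, ADVM index velocity
        v_mean  = np.array([0.25, 0.52, 0.78, 1.18, 1.60])    # m/s, from discharge measurements
        stage   = np.array([1.2, 1.6, 2.0, 2.8, 3.5])         # m
        area    = np.array([55.0, 70.0, 86.0, 120.0, 150.0])  # m^2, from surveyed standard cross section

        # Index velocity rating: simple linear regression V = a + b * v_index
        b, a = np.polyfit(v_index, v_mean, 1)

        # Stage-area rating: low-order polynomial fit A(stage)
        area_coeffs = np.polyfit(stage, area, 2)

        def discharge(v_index_obs, stage_obs):
            """Compute discharge Q = V * A from continuous ADVM and stage records."""
            V = a + b * v_index_obs
            A = np.polyval(area_coeffs, stage_obs)
            return V * A

        print(discharge(0.9, 2.4))  # discharge for one time step of the continuous record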

  2. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms use signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computational speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, applied monitoring, and novel field deployments.

  3. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  4. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  5. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  6. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers, have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier-Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two- and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy

  7. Computing and physical methods to calculate Pu

    International Nuclear Information System (INIS)

    Mohamed, Ashraf Elsayed Mohamed

    2013-01-01

    The main limitations arising from an increased plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, captures by 240 Pu and 242 Pu in the thermal and epithermal resonances disappear, and the 240 Pu and 242 Pu contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. Regarding core control in nominal or transient conditions, Pu enrichment leads to a decrease in beta-eff and in the efficiency of soluble boron and control rods. The Doppler effect also tends to decrease when Pu replaces U, so that in transients the core could diverge again if the control is not effective enough. As for the voiding effect, plutonium degradation and the accumulation of 240 Pu and 242 Pu after multiple recycling lead to spectrum hardening and to a decrease in control. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper, I discuss advanced computing and physical methods to calculate Pu inside nuclear reactors and gloveboxes, the different solutions that can be used to overcome the difficulties affecting safety parameters and reactor performance, and analyse the consequences of plutonium management on the whole fuel cycle, such as raw-material savings and the fraction of nuclear electric power involved in Pu management. Two types of scenario are considered: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving dilution of the plutonium across the whole nuclear park. (author)

  8. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
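
    For reference, the serial, unpreconditioned conjugate gradient iteration referred to above is short enough to state in full. The sketch below is a generic textbook version in Python/NumPy for a symmetric positive definite system; it is not the parallel, domain-decomposed implementation discussed in the paper.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for symmetric positive definite A (dense, serial sketch)."""
            x = np.zeros_like(b)
            r = b - A @ x          # residual
            p = r.copy()           # search direction
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Example: a small SPD system (a 1-D Laplacian-type matrix)
        n = 5
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        print(conjugate_gradient(A, b))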

  9. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  10. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E. Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  11. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  12. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  13. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  14. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  15. The Asymptotic Expansion Method via Symbolic Computation

    Directory of Open Access Journals (Sweden)

    Juan F. Navarro

    2012-01-01

    Full Text Available This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  16. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  17. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)

    1997-02-01

    The Noguchi method and the Horiuchi method have been used to calculate the radon concentration in water. In their original forms both methods have two problems: the calculated concentration changes with the extraction temperature because it depends on incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation is not consistent with gas-liquid equilibrium theory. Both problems are solved by improving the radon equation. I present the Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation. Results calculated with the improved method showed an error of about 10%. (S.Y.)

  18. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  19. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  20. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

  1. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  2. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  3. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues in the computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other, and the movements and deformations of the solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and their interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the parallelization methods and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.
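
    As a rough illustration of the particle scheme described above (not the authors' program), the following Python sketch advances a small mass-spring particle system by one explicit time step under gravity; collision handling, the cellular-automata liquid and the parallel version are omitted, and all numerical values are arbitrary.

        import numpy as np

        # Hypothetical 2-D mass-spring particle system: three particles in a chain.
        pos = np.array([[0.0, 1.0], [0.5, 1.0], [1.0, 1.0]])   # positions (m)
        vel = np.zeros_like(pos)                                # velocities (m/s)
        mass = 0.1                                              # kg per particle
        springs = [(0, 1), (1, 2)]                              # connected particle pairs
        rest_len, stiffness, damping = 0.5, 50.0, 0.2
        gravity = np.array([0.0, -9.81])
        dt = 1e-3

        def step(pos, vel):
            """One explicit (symplectic Euler) integration step of the particle system."""
            forces = np.tile(mass * gravity, (len(pos), 1))
            for i, j in springs:
                d = pos[j] - pos[i]
                length = np.linalg.norm(d)
                direction = d / length
                # Hooke spring force plus simple velocity damping along the spring
                f = stiffness * (length - rest_len) * direction \
                    + damping * np.dot(vel[j] - vel[i], direction) * direction
                forces[i] += f
                forces[j] -= f
            vel = vel + dt * forces / mass
            pos = pos + dt * vel
            return pos, vel

        for _ in range(1000):
            pos, vel = step(pos, vel)
        print(pos)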

  4. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  5. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  6. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are used as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. The ANN is also used via inverse modeling to determine the position, size and location of delaminations from changes in measured natural frequencies. The results clearly highlight the efficiency and robustness of the approach.
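
    The inverse-mapping idea (changes in natural frequencies in, delamination parameters out) can be sketched with an off-the-shelf regressor. The Python fragment below uses scikit-learn's MLPRegressor on synthetic data; the mapping, sample sizes and parameters are invented for illustration, and the paper's actual pipeline (FE-generated training data, Bayesian regularization, K-means data reduction, memetic optimization) is not reproduced.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for FE results: delamination (location, size) -> frequency shifts.
        n_samples = 500
        delam = rng.uniform([0.1, 0.01], [0.9, 0.20], size=(n_samples, 2))  # (location, size)
        # Hypothetical smooth mapping to shifts of the first 4 natural frequencies.
        freq_shift = np.column_stack([
            delam[:, 1] * np.sin(np.pi * k * delam[:, 0]) for k in range(1, 5)
        ]) + 0.001 * rng.standard_normal((n_samples, 4))

        # Inverse model: frequency shifts -> delamination parameters.
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        model.fit(freq_shift, delam)

        measured_shift = freq_shift[:1]          # pretend this came from a monitored laminate
        print(model.predict(measured_shift))     # estimated (location, size)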

  7. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    An automatic, highly efficient method is presented for the alignment of faceted solar dish concentrators. The isosceles-triangle configuration of each facet's footholds determines a fixed relation between light-spot displacements and foothold movements, which allows the amount of adjustment to be determined automatically. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  8. Method of center localization for objects containing concentric arcs

    Science.gov (United States)

    Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.

    2015-02-01

    This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.

  9. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first

  10. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complicated trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra techniques allow an analytical formulation to be derived with no human intervention. In particular, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
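
    To make the object of the computation concrete, the following Python sketch builds the geometric Jacobian of an all-revolute serial arm numerically from the joint axes and origins, using the standard column formula J_i = [z_i x (p_e - p_i); z_i]. This is a generic textbook construction, not the recursive symbolic scheme of the report; the example arm is hypothetical.

        import numpy as np

        def geometric_jacobian(z_axes, origins, p_end):
            """6 x n geometric Jacobian for an all-revolute serial arm.

            z_axes[i]  : unit vector of joint i's rotation axis (world frame)
            origins[i] : position of joint i's origin (world frame)
            p_end      : end-effector position (world frame)
            """
            n = len(z_axes)
            J = np.zeros((6, n))
            for i in range(n):
                J[:3, i] = np.cross(z_axes[i], p_end - origins[i])  # linear velocity part
                J[3:, i] = z_axes[i]                                 # angular velocity part
            return J

        # Hypothetical planar 2-link arm, both joints rotating about the world z-axis.
        z = [np.array([0.0, 0.0, 1.0])] * 2
        o = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
        p_e = np.array([2.0, 0.0, 0.0])
        print(geometric_jacobian(z, o, p_e))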

  11. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

  12. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  13. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the use of methods that have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The use of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis has been carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  14. Computer simulations of shear thickening of concentrated dispersions

    NARCIS (Netherlands)

    Boersma, W.H.; Laven, J.; Stein, H.N.

    1995-01-01

    Stokesian dynamics computer simulations were performed on monolayers of equally sized spheres. The influence of repulsive and attractive forces on the rheological behavior and on the microstructure were studied. Under specific conditions shear thickening could be observed in the simulations, usually

  15. Computational engineering applied to the concentrating solar power technology

    International Nuclear Information System (INIS)

    Giannuzzi, Giuseppe Mauro; Miliozzi, Adio

    2006-01-01

    Solar power plants based on parabolic-trough collectors present innumerable thermo-structural problems, related on the one hand to the high temperatures of the heat transfer fluid, and on the other to the need for highly precise aiming and structural resistance. Devising an engineering response to these problems implies analysing generally unconventional solutions. At present, computational engineering is the principal investigating tool; it speeds the design of prototype installations and significantly reduces the necessary but costly experimental programmes.

  16. Concentration gradient driven molecular dynamics: a new method for simulations of membrane permeation and separation.

    Science.gov (United States)

    Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur

    2017-05-01

    In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.

  17. Evaluation method for change of concentration of nuclear fuel material

    International Nuclear Information System (INIS)

    Kiyono, Takeshi; Ando, Ryohei.

    1997-01-01

    The present invention provides a method for evaluating the change in concentration of the constituents of nuclear fuel materials loaded in a reactor under neutron irradiation, based on analytic calculation rather than integration over time. The method evaluates the changing concentrations of nuclear fuel materials due to nuclear fission, neutron capture and radioactive decay during neutron irradiation. An arbitrary nuclide on the nuclear conversion chain is chosen as the standard nuclide; when the main fuel material is Pu-239, it is chosen as the standard nuclide. The ratio of the concentration of the standard nuclide to that of the nuclide being evaluated can be expressed through the ratio of the neutron reaction cross section of the standard nuclide to that of the nuclide being evaluated. Accordingly, the concentration of the nuclide being evaluated can be written as an analytic function of the concentration of the standard nuclide and the neutron reaction cross sections. As a result, by supplying an arbitrary concentration of the standard nuclide to the analytic formula, the concentration of each of the other nuclides can be determined analytically. (I.S.)
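
    For orientation, the kind of balance equation underlying such analytic treatments can be written in the usual textbook form (this generic depletion equation is not the patent's specific formula): for a nuclide i produced by neutron capture in nuclide i-1 and removed by its own neutron absorption and decay under flux phi,

        \frac{dN_i}{dt} = \phi\,\sigma_{c,\,i-1}\,N_{i-1} - \phi\,\sigma_{a,\,i}\,N_i - \lambda_i N_i .

    Dividing such equations for two nuclides on the same conversion chain eliminates explicit time and lets concentration ratios be expressed through ratios of neutron reaction cross sections, which is the relationship the abstract exploits.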

  18. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA the new powerful algorithm which improves many geometric computations and makes th

  19. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  20. A computational method for sharp interface advection.

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.

  1. Computational electromagnetic methods for transcranial magnetic stimulation

    Science.gov (United States)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable, internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  2. Application research of computational mass-transfer differential equation in MBR concentration field simulation.

    Science.gov (United States)

    Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan

    2016-01-01

    Building on intensive research into the fluid velocity distribution and biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of a mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The approach is as follows: first, use computational fluid dynamics to establish a flow governing-equation model of the fluid in the MBR membrane pool; second, solve this model by direct numerical simulation to obtain the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and solve it with the Seidel iteration method; finally, substitute real plant data into the velocity and concentration field models to compute simulation results, and use the visualization software Tecplot to display them. Analysis of the contour map of the COD concentration distribution shows that the simulation result conforms to the distribution of the COD concentration in the real membrane pool, and that the mass-transfer behaviour is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper provide a useful reference for design optimization of real MBR systems.
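
    The last two steps, discretizing the concentration field and solving it by Seidel iteration, can be sketched generically. The Python fragment below applies Gauss-Seidel sweeps to a steady two-dimensional convection-diffusion equation for a scalar concentration on a small grid with first-order upwind convection; the grid, velocity, diffusivity and boundary values are invented for illustration and the real MBR model is far more detailed.

        import numpy as np

        # Hypothetical setup: small uniform grid, constant velocity (u, v >= 0),
        # Dirichlet boundaries holding an inlet COD concentration on the left edge.
        nx, ny, h = 30, 30, 0.01        # grid size and spacing (m)
        D = 1e-4                        # effective diffusivity (m^2/s), made up
        u, v = 0.005, 0.0               # velocity components (m/s), made up
        C = np.zeros((nx, ny))
        C[0, :] = 100.0                 # inlet COD concentration (mg/L) on the left boundary

        a_diff = D / h**2
        a_u, a_v = u / h, v / h
        a_p = a_u + a_v + 4.0 * a_diff  # diagonal coefficient of the discrete equation

        # Gauss-Seidel sweeps: each interior value is updated in place using the
        # latest neighbouring values (first-order upwind convection, central diffusion).
        for _ in range(500):
            for i in range(1, nx - 1):
                for j in range(1, ny - 1):
                    C[i, j] = (a_u * C[i - 1, j] + a_v * C[i, j - 1]
                               + a_diff * (C[i + 1, j] + C[i - 1, j]
                                           + C[i, j + 1] + C[i, j - 1])) / a_p

        print(C[nx // 2, ny // 2])      # concentration at the centre of the domain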

  3. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures insure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  4. Modules and methods for all photonic computing

    Science.gov (United States)

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  5. Optical design teaching by computing graphic methods

    Science.gov (United States)

    Vazquez-Molini, D.; Muñoz-Luna, J.; Fernandez-Balbuena, A. A.; Garcia-Botella, A.; Belloni, P.; Alda, J.

    2012-10-01

    One of the key challenges in the teaching of optics is that students need not only to know the mathematics of optical design but also, and more importantly, to grasp and understand the optics in three-dimensional space. Having a clear image of the problem is the first step towards solving it. To achieve this, students must not only know the equation of the law of refraction but also understand how the main parameters of this law interact with one another; this should be a major goal of the course. Optical graphic methods are a valuable tool here, since they combine the advantage of visual information with the accuracy of a computer calculation.
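
    A small computational-graphics example in the spirit of the paper: the Python fragment below refracts a single ray at a flat interface using the vector form of Snell's law, the kind of calculation students can pair with a plot of the incident and refracted rays. The refractive indices and geometry are arbitrary illustrative values.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at an interface with unit normal n (pointing
            towards the incident medium), using the vector form of Snell's law.
            Returns the refracted unit direction, or None for total internal reflection."""
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            r = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = r**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None                      # total internal reflection
            return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

        # Example: ray going from air (n1 = 1.0) into glass (n2 = 1.5) at 30 degrees incidence.
        incident = np.array([np.sin(np.radians(30)), -np.cos(np.radians(30))])
        normal = np.array([0.0, 1.0])            # surface normal pointing back into the air
        t = refract(incident, normal, 1.0, 1.5)
        print(np.degrees(np.arcsin(abs(t[0]))))  # refracted angle, about 19.5 degrees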

  6. Computed tomography shielding methods: a literature review.

    Science.gov (United States)

    Curtis, Jessica Ryann

    2010-01-01

    To investigate available shielding methods in an effort to further awareness and understanding of existing preventive measures related to patient exposure in computed tomography (CT) scanning. Searches were conducted to locate literature discussing the effectiveness of commercially available shields. Literature containing information regarding breast, gonad, eye and thyroid shielding was identified. Because of rapidly advancing technology, the selection of articles was limited to those published within the past 5 years. The selected studies were examined using the following topics as guidelines: the effectiveness of the shield (percentage of dose reduction), the shield's effect on image quality, arguments for or against its use (including practicality) and overall recommendation for its use in clinical practice. Only a limited number of studies have been performed on the use of shields for the eyes, thyroid and gonads, but the evidence shows an overall benefit to their use. Breast shielding has been the most studied shielding method, with consistent agreement throughout the literature on its effectiveness at reducing radiation dose. The effect of shielding on image quality was not remarkable in a majority of studies. Although it is noted that more studies need to be conducted regarding the impact on image quality, the currently published literature stresses the importance of shielding in reducing dose. Commercially available shields for the breast, thyroid, eyes and gonads should be implemented in clinical practice. Further research is needed to ascertain the prevalence of shielding in the clinical setting.

  7. Office-based sperm concentration: A simplified method for ...

    African Journals Online (AJOL)

    Methods: Semen samples from 51 sperm donors were used. Following swim-up separation, the sperm concentration of the retrieved motile fraction was counted, as well as progressive motile sperm using a standardised wet preparation. The number of sperm in a 10 μL droplet covered with a 22 × 22 mm coverslip was ...

  8. Effect of Concentrated Language Encounter Method in Developing ...

    African Journals Online (AJOL)

    The paper examined the effect of the concentrated language encounter method in developing sight-word recognition skill in primary school pupils in Cross River State. The purpose of the study was to find out the effect on Primary One pupils' reading level and English sight-word recognition skill. It also examines the extent to which the ...

  9. Ranking filter methods for concentrating pathogens in lake water

    Science.gov (United States)

    Accurately comparing filtration methods for concentrating waterborne pathogens is difficult because of two important water matrix effects on recovery measurements, the effect on PCR quantification and the effect on filter performance. Regarding the first effect, we show how to create a control water...

  10. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise, and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations is analyzed, and compared to the treatment of these phenomena in the published literature. Next, we examine calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
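
    The Gaussian cutoff in spatial frequency space mentioned above is straightforward to state as code. The Python sketch below smooths a noisy two-dimensional field by multiplying its discrete Fourier transform with a Gaussian low-pass window; the field, grid and cutoff wavenumber are invented for illustration, and the thesis' actual inversion pipeline is not reproduced.

        import numpy as np

        def gaussian_lowpass(field, dx, k_cut):
            """Suppress spatial high-frequency noise with a smooth Gaussian cutoff
            applied in spatial-frequency (Fourier) space."""
            ny, nx = field.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            window = np.exp(-(KX**2 + KY**2) / (2.0 * k_cut**2))   # Gaussian cutoff
            return np.real(np.fft.ifft2(np.fft.fft2(field) * window))

        # Illustrative use: a smooth field contaminated with grid-scale noise.
        x = np.linspace(0, 1, 128)
        X, Y = np.meshgrid(x, x)
        clean = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.02)
        noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(clean.shape)
        smoothed = gaussian_lowpass(noisy, dx=x[1] - x[0], k_cut=60.0)
        print(np.abs(smoothed - clean).max())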

  11. Magnetic flux concentration methods for magnetic energy harvesting module

    Directory of Open Access Journals (Sweden)

    Wakiwaka Hiroyuki

    2013-01-01

    Full Text Available This paper presents magnetic flux concentration methods for a magnetic energy harvesting module. The purpose of this study is to harvest 1 mW of energy with a Brooks coil 2 cm in diameter from an environmental magnetic field at 60 Hz. Because the harvested power is proportional to the square of the magnetic flux density, we consider the use of a magnetic flux concentration coil and a magnetic core. The magnetic flux concentration coil consists of an air-core Brooks coil and a resonant capacitor. When a uniform magnetic field crosses the coil, the magnetic flux distribution around the coil is changed. It is found that in a certain area the magnetic field is concentrated to more than 20 times the uniform field. Compared with the air-core coil, our designed magnetic core increases the harvested energy tenfold. According to the ICNIRP 2010 guideline, the acceptable level of magnetic field is 0.2 mT in the frequency range between 25 Hz and 400 Hz. Without the two magnetic flux concentration methods, the corresponding energy is limited to 1 µW. In contrast, our experimental results successfully demonstrate energy harvesting of 1 mW from a magnetic field of 0.03 mT at 60 Hz.

  12. Computational Studies of Protein Hydration Methods

    Science.gov (United States)

    Morozenko, Aleksandr

    It is widely appreciated that water plays a vital role in protein function. Long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate enzymatic catalytic reactions by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies performed to gain insight into the problem of fast and accurate prediction of potential water sites inside the internal cavities of proteins. Specifically, we focus on the task of attaining correspondence between results obtained from computational experiments and experimental data available from X-ray structures. An overview of existing methods for predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. The differences of water molecules in various media, particularly gas, liquid and the protein interior, and theoretical aspects of designing an adequate model of water for the protein environment are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods for placing water molecules into internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to the protein body, which achieves a higher degree of agreement with the experimental data reported in protein crystal structures than other techniques available in biophysical software. The new methodology is tested on a set of high-resolution crystal structures of the oligopeptide-binding protein (OppA), which contain a large number of resolved internal water molecules, and applied to bovine heart cytochrome c oxidase in the fully

  13. Methods for obtaining a uniform volume concentration of implanted ions

    International Nuclear Information System (INIS)

    Reutov, V.F.

    1995-01-01

    Three simple practical methods of irradiation with high-energy particles that provide a uniform volume concentration of implanted ions in massive samples are described in the present paper. The first method is based on two-sided irradiation of a plane sample while it rotates in the flux of projectiles. The second method of uniform ion alloying uses free air as a filter of varying absorbing power, obtained by moving the irradiated sample along an ion beam brought out to the atmosphere. The third method for obtaining a uniform volume concentration of implanted ions in a massive sample consists of irradiating the sample through an absorbing filter shaped as a foil curved according to a parabolic law and moving along its surface. The first method is the most effective for obtaining a great number of samples, for example for mechanical tests, the second for irradiation in different gaseous media, and the third for obtaining high concentrations of implanted ions under controlled (regulated) thermal and deformation conditions. 2 refs., 7 figs

  14. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1

  15. Method and device for measuring the smoke concentration in air

    International Nuclear Information System (INIS)

    Rennemo, B.

    1994-01-01

    The patent deals with a method and a device for measuring the smoke concentration in air. In a smoke chamber are located two electrodes, connected to a voltage source to form a circuit in which a DC current flows. A radioactive radiation source that ionizes the air molecules is located in the vicinity of the smoke chamber, so that the number of ionized air molecules formed depends upon the radiation intensity of the ion source and the concentration of smoke particles in the smoke chamber. The charging voltage further implies that a cloud of high ion concentration builds up close to the surface of the electrodes. The ion cloud is discharged capacitively upon a plurality of short voltage pulses applied to the electrodes, resulting in current pulses substantially greater than the DC current flowing through the chamber. 8 figs

  16. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2 constituents. The liquid iron
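
    The same constrained minimization can also be reproduced outside a spreadsheet. The sketch below is a minimal illustration of direct Gibbs energy minimization for an ideal-gas water-gas shift mixture, not the Fe-Cr-O-C-Ni system of the paper; the standard formation Gibbs energies are approximate 298 K values and the solver choice is an assumption. In the spreadsheet approach, the same objective and element-balance constraints are entered as cell formulas and handed to Solver.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R, T, P = 8.314, 298.15, 1.0                         # J/(mol K), K, bar
    species = ["CO", "H2O", "CO2", "H2"]
    g0 = np.array([-137.2e3, -228.6e3, -394.4e3, 0.0])   # approx. standard Gibbs energies, J/mol

    # Element balance matrix (rows: C, H, O) and element totals for 1 mol CO + 1 mol H2O
    A = np.array([[1, 0, 1, 0],
                  [0, 2, 0, 2],
                  [1, 1, 2, 0]], dtype=float)
    b = A @ np.array([1.0, 1.0, 0.0, 0.0])

    def gibbs(n):
        """Total Gibbs energy of an ideal-gas mixture with mole numbers n."""
        n = np.clip(n, 1e-12, None)
        mu = g0 + R * T * np.log(n / n.sum() * P)
        return float(n @ mu)

    res = minimize(gibbs, x0=np.full(4, 0.5),
                   bounds=[(1e-12, None)] * 4,
                   constraints={"type": "eq", "fun": lambda n: A @ n - b},
                   method="SLSQP")
    print(dict(zip(species, np.round(res.x, 4))))        # equilibrium mole numbers
    ```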

  17. On the peculiarities of LDA method in two-phase flows with high concentrations of particles

    Science.gov (United States)

    Poplavski, S. V.; Boiko, V. M.; Nesterov, A. U.

    2016-10-01

    Popular applications of laser Doppler anemometry (LDA) in gas dynamics are reviewed. It is shown that the most popular method cannot be used in supersonic flows and two-phase flows with high concentrations of particles. A new approach to implementation of the known LDA method based on direct spectral analysis, which offers better prospects for such problems, is presented. It is demonstrated that the method is suitable for gas-liquid jets. Owing to the progress in laser engineering, digital recording of spectra, and computer processing of data, the method is implemented at a higher technical level and provides new prospects of diagnostics of high-velocity dense two-phase flows.

  18. Using nuclear methods for analyzing materials and determining concentration gradients

    International Nuclear Information System (INIS)

    Darras, R.

    After reviewing the various types of nuclear chemical analysis methods, the possibilities of analysis by activation and by direct observation of nuclear reactions are specifically described. These methods make it possible to analyse trace elements or impurities, even at trace levels, in materials, with selectivity, accuracy and great sensitivity. This latter property also makes them advantageous for determining major elements in small quantities of available matter. Furthermore, they lend themselves to carrying out surface analyses and the determination of concentration gradients, given a careful choice of the nature and energy of the incident particles. The paper is illustrated with typical examples of analyses on steels, pure iron, refractory metals, etc [fr]

  19. Method of increased bioethanol concentration with reduced heat consumption

    International Nuclear Information System (INIS)

    Bremers, G.; Blija, A.

    2003-01-01

    Ethanol dehydration applying the method of non-reflux saline distillation was conducted on a laboratory scale and in larger pilot equipment. The results make it possible to recommend the new method for increasing ethanol concentration. Heat consumption was reduced by 50% and cooling water consumption by 90% when the non-reflux distillation was applied. The reflux flow in the column is replaced with a contact mass, which consists of a saline layer and a secluding medium. A basis diagram of ethanol non-reflux saline distillation was established. The distillation equipment and the number of plates in the column can be calculated using the basis diagram. Absolute ethanol can be obtained with non-reflux saline distillation. Absolute ethanol is used in the production of biofuel (author)

  20. Computational methods in sequence and structure prediction

    Science.gov (United States)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed

  1. The active titration method for measuring local hydroxyl radical concentration

    Science.gov (United States)

    Sprengnether, Michele; Prinn, Ronald G.

    1994-01-01

    We are developing a method for measuring ambient OH by monitoring its rate of reaction with a chemical species. Our technique involves the local, instantaneous release of a mixture of saturated cyclic hydrocarbons (titrants) and perfluorocarbons (dispersants). These species must not normally be present in ambient air above the part per trillion concentration. We then track the mixture downwind using a real-time portable ECD tracer instrument. We collect air samples in canisters every few minutes for roughly one hour. We then return to the laboratory and analyze our air samples to determine the ratios of the titrant to dispersant concentrations. The trends in these ratios give us the ambient OH concentration from the relation d(ln R)/dt = -k[OH]. A successful measurement of OH requires that the trends in these ratios be measurable. We must not perturb ambient OH concentrations. The titrant to dispersant ratio must be spatially invariant. Finally, heterogeneous reactions of our titrant and dispersant species must be negligible relative to the titrant reaction with OH. We have conducted laboratory studies of our ability to measure the titrant to dispersant ratios as a function of concentration down to the few part per trillion level. We have subsequently used these results in a Gaussian puff model to estimate our expected uncertainty in a field measurement of OH. Our results indicate that under a range of atmospheric conditions we expect to be able to measure OH with a sensitivity of 3x10^5 cm^-3. In our most optimistic scenarios, we obtain a sensitivity of 1x10^5 cm^-3. These sensitivity values reflect our anticipated ability to measure the ratio trends. However, because we are also using a rate constant to obtain our [OH] from this ratio trend, our accuracy cannot be better than that of the rate constant, which we expect to be about 20 percent.
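
    The relation d(ln R)/dt = -k[OH] means the OH concentration follows directly from a linear fit of ln R against time. The sketch below is a minimal, hypothetical illustration: the titrant/dispersant ratios, sampling times and rate constant are made-up values, not data from the study.

    ```python
    import numpy as np

    # Hypothetical titrant/dispersant concentration ratios sampled over ~1 hour
    t = np.array([0, 600, 1200, 1800, 2400, 3000, 3600])             # s
    R = np.array([1.000, 0.979, 0.958, 0.938, 0.918, 0.899, 0.880])  # dimensionless ratio

    k = 7.0e-12   # assumed titrant + OH rate constant, cm^3 molecule^-1 s^-1

    # d(ln R)/dt = -k [OH]  ->  [OH] = -slope / k
    slope, intercept = np.polyfit(t, np.log(R), 1)
    OH = -slope / k
    print(f"[OH] ~ {OH:.2e} molecules cm^-3")
    ```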

  2. 38 CFR 4.76a - Computation of average concentric contraction of visual fields.

    Science.gov (United States)

    2010-07-01

    ... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...

  3. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  4. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  5. Computing time-series suspended-sediment concentrations and loads from in-stream turbidity-sensor and streamflow data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.

    2010-01-01

    Over the last decade, use of a method for computing suspended-sediment concentration and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
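
    A common form of such a surrogate rating is a linear regression of log-transformed suspended-sediment concentration on log-transformed turbidity, optionally with streamflow as a second explanatory variable. The sketch below is a generic illustration with made-up numbers, not the USGS procedure itself, which also prescribes bias correction and uncertainty analysis.

    ```python
    import numpy as np

    # Hypothetical paired samples: turbidity (FNU) and measured SSC (mg/L)
    turb = np.array([12, 25, 40, 80, 150, 300, 600], dtype=float)
    ssc  = np.array([18, 35, 52, 110, 190, 380, 820], dtype=float)

    # Fit log10(SSC) = b0 + b1 * log10(turbidity) by least squares
    X = np.column_stack([np.ones_like(turb), np.log10(turb)])
    b, *_ = np.linalg.lstsq(X, np.log10(ssc), rcond=None)

    # Apply the rating to a continuous turbidity record to get an SSC time series
    turb_record = np.array([15.0, 22.0, 48.0, 210.0, 520.0])
    ssc_est = 10 ** (b[0] + b[1] * np.log10(turb_record))
    print("coefficients:", np.round(b, 3))
    print("estimated SSC (mg/L):", np.round(ssc_est, 1))
    ```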

  6. Advanced Computational Methods in Bio-Mechanics.

    Science.gov (United States)

    Al Qahtani, Waleed M S; El-Anwar, Mohamed I

    2018-04-15

    A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have a similar impact on surgery to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and the performance arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical purposes, numerous research efforts funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.

  7. A computer method for spectral classification

    International Nuclear Information System (INIS)

    Appenzeller, I.; Zekl, H.

    1978-01-01

    The authors describe the start of an attempt to improve the accuracy of spectroscopic parallaxes by evaluating spectroscopic temperature and luminosity criteria, such as those of the MK classification, on spectrograms analyzed automatically by means of a suitable computer program. (Auth.)

  8. Computational structural biology: methods and applications

    National Research Council Canada - National Science Library

    Schwede, Torsten; Peitsch, Manuel Claude

    2008-01-01

    ... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...

  9. Bacterial contamination of platelet concentrates: pathogen detection and inactivation methods

    Directory of Open Access Journals (Sweden)

    Dana Védy

    2009-04-01

    Full Text Available Whereas the reduction of transfusion-related viral transmission has been a priority during the last decade, bacterial infection transmitted by transfusion still remains associated with high morbidity and mortality, and constitutes the most frequent infectious risk of transfusion. This problem especially concerns platelet concentrates because of their favorable bacterial growth conditions. This review gives an overview of platelet transfusion-related bacterial contamination as well as of the different strategies to reduce this problem by using either bacterial detection or inactivation methods.

  10. Determination of deuterium concentration by falling drop method

    International Nuclear Information System (INIS)

    Kawai, Hiroshi; Morishima, Hiroshige; Koga, Taeko; Niwa, Takeo; Fujii, Takashi.

    1976-01-01

    The falling drop method for determination of the deuterium concentration in a water sample was studied. The principle is the same as that developed by Kirshenbaum, I. in 1932. One drop of the water sample falls down through a column filled with o-fluorotoluene at a temperature of nearly 25°C. The falling time is measured, not with a stop-watch, but with two light pulses led to a photomultiplier with mirrors, which make two pulse marks on moving chart paper. The distance between the two pulse marks is proportional to the falling time. Instead of water-filled double constant-temperature chambers equipped with heaters, thermostats and propellers for stirring, the column is dipped in circulating water supplied from a ''Thermoelectric'' made by the ''Sharp'' company, which can circulate constant-temperature water cooled or heated with thermoelements. The variation of the temperature is about 0.01°C. The range of deuterium concentration in our case was 20 -- 60 D%. Sensitivity increased as the medium temperature decreased and as the deuterium concentration of the water sample increased. (auth.)

  11. Contrast timing in computed tomography: Effect of different contrast media concentrations on bolus geometry

    International Nuclear Information System (INIS)

    Mahnken, Andreas H.; Jost, Gregor; Seidensticker, Peter; Kuhl, Christiane; Pietsch, Hubertus

    2012-01-01

    Objective: To assess the effect of low-osmolar, monomeric contrast media with different iodine concentrations on bolus shape in aortic CT angiography. Materials and methods: Repeated sequential computed tomography scanning of the descending aorta of eight beagle dogs (5 male, 12.7 ± 3.1 kg) was performed without table movement with a standardized CT scan protocol. Iopromide 300 (300 mg I/mL), iopromide 370 (370 mg I/mL) and iomeprol 400 (400 mg I/mL) were administered via a foreleg vein with an identical iodine delivery rate of 1.2 g I/s and a total iodine dose of 300 mg I/kg body weight. Time-enhancement curves were computed and analyzed. Results: Iopromide 300 showed the highest peak enhancement (445.2 ± 89.1 HU), steepest up-slope (104.2 ± 17.5 HU/s) and smallest full width at half maximum (FWHM; 5.8 ± 1.0 s). Peak enhancement, duration of FWHM, enhancement at FWHM and up-slope differed significantly between iopromide 300 and iomeprol 400 (p < 0.05). Conclusions: Low-viscosity iopromide 300 results in a better defined bolus with a significantly higher peak enhancement, steeper up-slope and smaller FWHM when compared to iomeprol 400. These characteristics potentially affect contrast timing.
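
    The bolus-geometry metrics quoted above (peak enhancement, up-slope, FWHM) can be extracted from a sampled time-enhancement curve in a few lines. The sketch below is a generic illustration on a synthetic curve, not the analysis code used in the study; the sampling interval and curve shape are assumptions.

    ```python
    import numpy as np

    # Synthetic time-enhancement curve: attenuation (HU) sampled every second
    t = np.arange(0, 40, 1.0)                            # s
    hu = 400.0 * np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)  # simplified bolus shape

    peak = hu.max()
    t_peak = t[hu.argmax()]

    # Up-slope: mean rate of rise between 10% and 90% of peak on the ascending limb
    asc = slice(0, hu.argmax() + 1)
    lo, hi = 0.1 * peak, 0.9 * peak
    t_lo = np.interp(lo, hu[asc], t[asc])
    t_hi = np.interp(hi, hu[asc], t[asc])
    upslope = (hi - lo) / (t_hi - t_lo)

    # FWHM: width of the curve at half of the peak enhancement
    above = t[hu >= 0.5 * peak]
    fwhm = above[-1] - above[0]

    print(f"peak={peak:.0f} HU at {t_peak:.0f} s, up-slope={upslope:.1f} HU/s, FWHM={fwhm:.1f} s")
    ```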

  12. Soft computing methods for geoidal height transformation

    Science.gov (United States)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.

  13. Soft Computing Methods in Design of Superalloys

    Science.gov (United States)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  14. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
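
    One of the three methodology classes mentioned above, online updating for stream data, can be illustrated with ordinary least squares: the sufficient statistics X'X and X'y are accumulated block by block, so the full data never need to be held in memory. The sketch below is a generic illustration of that idea in Python, not the authors' R implementation or their variable-selection extension.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = 3
    XtX = np.zeros((p, p))      # accumulated X'X
    Xty = np.zeros(p)           # accumulated X'y
    beta_true = np.array([1.0, -2.0, 0.5])

    # Process the data stream in blocks; only the p x p summaries are retained
    for _ in range(100):
        X = rng.normal(size=(1000, p))
        y = X @ beta_true + rng.normal(scale=0.1, size=1000)
        XtX += X.T @ X
        Xty += X.T @ y

    beta_hat = np.linalg.solve(XtX, Xty)
    print(np.round(beta_hat, 3))   # matches the full-data OLS estimate
    ```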

  15. Statistical methods and computing for big data.

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.

  16. Greenberger-Horne-Zeilinger states-based blind quantum computation with entanglement concentration.

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Lu, Wei; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing

    2017-09-11

    In a blind quantum computation (BQC) protocol, the servers' quantum computing capabilities are complicated and powerful, while the clients' are not. It is still a challenge for clients to delegate quantum computation to servers while keeping the clients' inputs, outputs and algorithms private. Unfortunately, quantum channel noise is unavoidable in practical transmission. In this paper, a novel BQC protocol based on maximally entangled Greenberger-Horne-Zeilinger (GHZ) states is proposed which doesn't need a trusted center. The protocol includes a client and two servers, where the client only needs to share quantum channels with the two servers, who have full-advantage quantum computers. The two servers perform entanglement concentration to remove the noise, where the success probability can almost reach 100% in theory. But they learn nothing in the process of concentration because of the no-signaling principle, so this BQC protocol is secure and feasible.

  17. Evaluation of methods for monitoring air concentrations of hydrogen sulfide

    Directory of Open Access Journals (Sweden)

    Katarzyna Janoszka

    2013-06-01

    Full Text Available The development of different branches of industry and growing fossil fuel mining result in a considerable emission of by-products. Major air pollutants are: CO, CO₂, SO₂, SO₃, H₂S, nitrogen oxides, as well as compounds of organic origin. The main aim of this paper is to review and evaluate methods used for monitoring of hydrogen sulfide in the air. Different instrumental techniques were discussed (electrochemical, chromatographic and spectrophotometric, wet and dry) to select the method most suitable for monitoring low levels of hydrogen sulfide, close to its odor threshold. Based on the literature review, a method for H₂S determination in the air, involving absorption in aqueous zinc acetate and reaction with N,N-dimethyl-p-phenylenediamine and FeCl₃, has been selected and preliminarily verified. The adopted method allows for routine measurements of low concentrations of hydrogen sulfide, close to its odor threshold, in workplaces and ambient air. Med Pr 2013;64(3):449–454

  18. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
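
    The core idea of encoding each gate's truth table in a tensor and reading solution counts off a full contraction can be shown at toy scale without the compression-decimation machinery. The sketch below is an illustrative example with two AND gates, not the authors' vertex-model construction on a square lattice.

    ```python
    import numpy as np

    # Truth-table tensor of a 2-input AND gate: T[a, b, out] = 1 iff out == a AND b
    T = np.zeros((2, 2, 2))
    for a in (0, 1):
        for b in (0, 1):
            T[a, b, a & b] = 1.0

    # Chain two gates: m = AND(a, b), out = AND(m, c).
    # Contracting over every index counts all consistent assignments
    # (here 2**3 = 8, one internal configuration per choice of the three inputs).
    total = np.einsum('abm,mcn->', T, T)

    # Fixing boundary data: require the final output to be 1 (one-hot vector on n)
    forced = np.einsum('abm,mcn,n->', T, T, np.array([0.0, 1.0]))

    print(int(total), int(forced))   # 8 solutions in total, 1 with the output forced to 1
    ```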

  19. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  20. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  1. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  2. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  3. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  4. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  5. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel`son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  6. A computer program integrating a multichannel analyzer with gamma analysis for the estimation of 226 Ra concentration in soil samples

    International Nuclear Information System (INIS)

    Wilson, J. E.

    1992-08-01

    A new hardware/software system has been implemented using the existing three-regions-of-interest method for determining the concentration of 226 Ra in soil samples for the Pollutant Assessment Group of the Oak Ridge National Laboratory. Consisting of a personal computer containing a multichannel analyzer, the system utilizes a new program combining the multichannel analyzer with a program analyzing gamma-radiation spectra for 226 Ra concentrations. This program uses a menu interface to minimize and simplify the tasks of system operation

  7. Computational Methods for Conformational Sampling of Biomolecules

    DEFF Research Database (Denmark)

    Bottaro, Sandro

    Proteins play a fundamental role in virtually every process within living organisms. For example, some proteins act as enzymes, catalyzing a wide range of reactions necessary for life, others mediate the cell interaction with the surrounding environment and still others have regulatory functions. First, we developed a mathematical approach to a classic geometrical problem in protein simulations, and demonstrated its superiority compared to existing approaches. Secondly, we have constructed a more accurate implicit model of the aqueous environment, which is of fundamental importance in protein chemistry. This model is computationally much faster than models where water molecules are represented explicitly. Finally, in collaboration with the group of structural bioinformatics at the Department of Biology (KU), we have applied these techniques in the context of modeling of protein structure and flexibility from low

  8. Computational Method for Atomistic-Continuum Homogenization

    National Research Council Canada - National Science Library

    Chung, Peter

    2002-01-01

    The homogenization method is used as a framework for developing a multiscale system of equations involving atoms at zero temperature at the small scale and continuum mechanics at the very large scale...

  9. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical analysis of cascades, the use of virtual components is a very useful method. For complicated cases of cascades, numerical analysis has to be employed. However, bound to the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restrained at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of virtual components. (authors)

  10. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    The creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
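
    The contrast between the two corrections is easy to express in code. The sketch below is a generic illustration on simulated data, not the analysis in the paper: the ratio-based correction simply divides by creatinine, while the model-based correction enters log creatinine (plus covariates such as a sex indicator) as regressors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    sex = rng.integers(0, 2, n)                              # 0 = female, 1 = male (simulated)
    log_creat = 0.2 * sex + rng.normal(0, 0.4, n)            # creatinine depends on sex and hydration
    log_analyte = 0.7 * log_creat + rng.normal(0, 0.3, n)    # analyte tracks hydration only

    # Ratio-based correction: analyte / creatinine, i.e. a difference on the log scale
    ratio_corrected = log_analyte - log_creat

    # Model-based correction: regress log analyte on log creatinine and sex,
    # then compare groups through the sex coefficient
    X = np.column_stack([np.ones(n), log_creat, sex])
    beta, *_ = np.linalg.lstsq(X, log_analyte, rcond=None)

    print("ratio-based male-female difference:",
          round(ratio_corrected[sex == 1].mean() - ratio_corrected[sex == 0].mean(), 3))
    print("model-based sex coefficient:", round(beta[2], 3))
    ```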

  11. A revised method to calculate the concentration time integral of atmospheric pollutants

    International Nuclear Information System (INIS)

    Voelz, E.; Schultz, H.

    1980-01-01

    It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the ''particle-in-cell'' method (PIC). This is a numerical method by which the transport of, and the diffusion in, the plume is reproduced in such a way that particles representing the concentration are moved time step-wise in restricted regions (cells), separately with the advection velocity and the diffusion velocity. This has a systematic advantage over the steady state Gaussian plume model usually used. The fixed-point concentration time integral is calculated directly instead of being substituted by the locally integrated concentration at a constant time, as is done in the Gaussian model. In this way inaccuracies due to the above mentioned computational techniques may be avoided for short-time emissions, as may be seen from the fact that both integrals do not lead to the same results. Also the PIC method enables one to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances from the source. This is an effect often observed in measurements. (author)

  12. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  13. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  14. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  15. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided in four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but still even the best methods produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.

  17. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  18. Computational Methods for Physicists Compendium for Students

    CERN Document Server

    Sirca, Simon

    2012-01-01

    This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways to optimize program execution speeds. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less stress is given to the explanation of individual algorithms. The book aims to foster independent thinking and a certain amount of scepticism and scrutiny in the reader, rather than blind reliance on readily available commercial tools.

  19. Measurement method of cardiac computed tomography (CT)

    International Nuclear Information System (INIS)

    Watanabe, Shigeru; Yamamoto, Hironori; Yumura, Yasuo; Yoshida, Hideo; Morooka, Nobuhiro

    1980-01-01

    CT was carried out in 126 cases consisting of 31 normals, 17 cases of mitral stenosis (MS), 8 cases of mitral regurgitation (MR), 11 cases of aortic stenosis (AS), 9 cases of aortic regurgitation (AR), 20 cases of myocardial infarction (MI), 8 cases of atrial septal defect (ASD) and 22 hypertensives. The 20-second scans were performed every 1.5 cm from the 2nd intercostal space to the 5th or 6th intercostal space. The computed tomograms obtained were classified into 8 levels by cross-sectional anatomy; levels of (1) the aortic arch, (2) just beneath the aortic arch, (3) the pulmonary artery bifurcation, (4) the right atrial appendage or the upper right atrium, (5) the aortic root, (6) the upper left ventricle, (7) the mid left ventricle, and (8) the lower left ventricle. The diameter (anteroposterior and transverse) and cross-sectional area were measured for the ascending aorta (Ao), descending aorta (AoD), superior vena cava (SVC), inferior vena cava (IVC), pulmonary artery branch (PA), main pulmonary artery (mPA), left atrium (LA), right atrium (RA), and right ventricular outflow tract (RVOT) on each level where they were clearly distinguished. However, it was difficult to separate the cardiac wall from the cardiac cavity because there was little difference in X-ray attenuation coefficient between the myocardium and blood. Therefore, at the mid-ventricular level, the diameter and area of the total cardiac shadow were measured, and the cardiac ratios to the thorax were then calculated. The normal ranges of these values are shown in a table, and abnormal characteristics in cardiac disease are presented in comparison with the normal values. In MS, the diameter and area of the LA were significantly larger than normal. In MS and ASD, all of the right cardiac system was larger than normal, especially RA and SVC in MS, and PA and RVOT in ASD. The diameter and area of the aortic root were larger than normal, in the order AR, AS and HT. (author)

  20. Concentrator optical characterization using computer mathematical modelling and point source testing

    Science.gov (United States)

    Dennison, E. W.; John, S. L.; Trentelman, G. F.

    1984-01-01

    The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray tracing computer programs to calculate the mathematical model is described. These ray tracing programs can include any type of optical configuration, from simple paraboloids to arrays of spherical facets, and can be adapted to microcomputers or larger computers, which can graphically display real-time comparison of calculated and measured data.

  1. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, time of computation and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended

  2. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  3. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  4. EXTRAN: A computer code for estimating concentrations of toxic substances at control room air intakes

    International Nuclear Information System (INIS)

    Ramsdell, J.V.

    1991-03-01

    This report presents the NRC staff with a tool for assessing the potential effects of accidental releases of radioactive materials and toxic substances on habitability of nuclear facility control rooms. The tool is a computer code that estimates concentrations at nuclear facility control room air intakes given information about the release and the environmental conditions. The name of the computer code is EXTRAN. EXTRAN combines procedures for estimating the amount of airborne material, a Gaussian puff dispersion model, and the most recent algorithms for estimating diffusion coefficients in building wakes. It is a modular computer code, written in FORTRAN-77, that runs on personal computers. It uses a math coprocessor, if present, but does not require one. Code output may be directed to a printer or disk files. 25 refs., 8 figs., 4 tabs
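
    EXTRAN's dispersion stage is built around a Gaussian puff model. The sketch below is a generic Gaussian puff ground-level concentration calculation, shown only to illustrate the kind of computation involved; the release mass, wind speed and dispersion coefficients are assumed values, and the actual code adds source-term estimation and building-wake diffusion coefficients.

    ```python
    import numpy as np

    def puff_concentration(x, y, z, t, Q, u, sig_x, sig_y, sig_z, H=0.0):
        """Concentration of an instantaneous Gaussian puff released at the origin.

        Q: released mass, u: wind speed along x, H: release height;
        ground reflection is included via an image source. Dispersion
        coefficients are held constant here for simplicity; in practice
        they grow with travel time."""
        norm = Q / ((2 * np.pi) ** 1.5 * sig_x * sig_y * sig_z)
        along = np.exp(-((x - u * t) ** 2) / (2 * sig_x ** 2))
        cross = np.exp(-(y ** 2) / (2 * sig_y ** 2))
        vert = (np.exp(-((z - H) ** 2) / (2 * sig_z ** 2))
                + np.exp(-((z + H) ** 2) / (2 * sig_z ** 2)))
        return norm * along * cross * vert

    # Illustrative numbers only: 1 kg released, 2 m/s wind, intake 100 m downwind
    t = np.linspace(0, 200, 400)                       # s
    c = puff_concentration(100.0, 0.0, 2.0, t, Q=1.0, u=2.0,
                           sig_x=15.0, sig_y=12.0, sig_z=6.0, H=0.0)
    print(f"peak concentration ~ {c.max():.2e} kg/m^3 at t = {t[c.argmax()]:.0f} s")
    ```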

  5. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  6. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  7. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  8. Computer program for Scatchard analysis of protein: Ligand interaction - use for determination of soluble and nuclear steroid receptor concentrations

    International Nuclear Information System (INIS)

    Leake, R.; Cowan, S.; Eason, R.

    1998-01-01

    Steroid receptor concentration may be determined routinely in biopsy samples of breast and endometrial cancer by the competition method. This method yields data for both the soluble and nuclear fractions of the tissue. The data are usually subject to Scatchard analysis. This Appendix describes a computer program written initially for a PDP-11. It has been modified for use with IBM, Apple Macintosh and BBC microcomputers. The nature of the correction for competition is described and examples of the printout are given. The program is flexible and its use for different receptors is explained. The program can be readily adapted to other assays in which Scatchard analysis is appropriate
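
    The program listing itself is not part of this record; as a minimal illustration of the Scatchard analysis it performs, the following Python sketch fits the linear Scatchard form bound/free = (Bmax - bound)/Kd to hypothetical binding data. The data values are invented for the example and no competition correction is included.

    ```python
    import numpy as np

    # Hypothetical saturation-binding data: bound and free ligand at each point
    # (arbitrary concentration units).
    bound = np.array([0.8, 1.5, 2.1, 2.6, 2.9])
    free = np.array([0.5, 1.2, 2.5, 5.0, 10.0])

    # Scatchard analysis: bound/free plotted against bound is linear with
    # slope -1/Kd and x-intercept Bmax (the binding-site concentration).
    ratio = bound / free
    slope, intercept = np.polyfit(bound, ratio, 1)

    kd = -1.0 / slope            # dissociation constant
    bmax = -intercept / slope    # receptor (binding site) concentration
    print(f"Kd ~ {kd:.2f}, Bmax ~ {bmax:.2f}")
    ```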

  9. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. In this report are described the basic model (02 version), theoretical definitions and computation methods [fr

  10. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Full Text Available Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field, that is, the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation openings, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in the premises, allowing quick screening of possible variants and enabling an aggregated («enlarged») evaluation of the air ion concentration in the premises. Methodology. A complex of numerical models for calculating the air ion regime in premises is developed. The CFD numerical model is based on the aerodynamics, electrostatics and mass transfer equations, and takes into account the effect of air flows caused by the ventilation operation, diffusion, electric field effects, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computing the indoor air ion regime allows rapid calculation of the ion concentration field, including pulsed operation of the ionizer. Findings. Calculated data are obtained from which the ion concentration can be estimated anywhere in a premises with artificial air ionization. An example of calculating the negative ion concentration with the CFD numerical model in a premises with reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premise, which
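
    The paper's balance model is not given in the record; the Python sketch below shows one plausible single-compartment balance of this general kind, with production from a pulsed ionizer and losses to ion-ion recombination, attachment to dust, and ventilation. The functional form and all coefficient values are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    def ion_balance(t_end=600.0, dt=0.1, q_on=1.0e6, period=60.0, duty=0.5,
                    alpha=1.6e-6, beta=1.0e-6, dust=1.0e4, ach=2.0 / 3600.0):
        """Minimal balance model for the mean negative-ion concentration n(t)
        [ions/cm^3] in a room with a pulsed ionizer: production q(t) minus
        ion-ion recombination (alpha*n^2), attachment to dust (beta*n*dust)
        and removal by ventilation (air changes per second, ach)."""
        steps = int(t_end / dt)
        n = 0.0
        history = np.empty(steps)
        for i in range(steps):
            t = i * dt
            q = q_on if (t % period) < duty * period else 0.0  # pulsed source
            n += dt * (q - alpha * n * n - beta * n * dust - ach * n)
            history[i] = n
        return history

    print(ion_balance()[-1])  # concentration at the end of the simulated interval
    ```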

  11. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the time-scale separation parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz...

  12. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
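
    The record does not give the DLCT formulation itself; the sketch below shows only the generic least-mean-square (LMS) adaptive-filter update that such adaptive computation schemes build on, applied to a toy system-identification problem. Parameter names and values are illustrative.

    ```python
    import numpy as np

    def lms_filter(x, d, taps=3, mu=0.02):
        """Basic LMS adaptive filter: adapts `taps` weights so that the filtered
        input x tracks the desired signal d. This is only the generic LMS building
        block, not the paper's discrete-LCT structure."""
        w = np.zeros(taps)
        y = np.zeros_like(d, dtype=float)
        for n in range(taps - 1, len(x)):
            window = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
            y[n] = w @ window                      # filter output
            e = d[n] - y[n]                        # instantaneous error
            w += 2.0 * mu * e * window             # stochastic-gradient update
        return w, y

    # Toy usage: identify an unknown 3-tap FIR system from its input/output data.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(2000)
    d = np.convolve(x, [0.5, -0.3, 0.2])[:len(x)]
    w, _ = lms_filter(x, d)
    print(w)   # converges toward [0.5, -0.3, 0.2]
    ```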

  13. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  14. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subject...

  15. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  16. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  17. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

  18. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

    Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  19. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations on fractions introduced by Fibonacci. Thanks to the possibility of reproducing Fibonacci’s different computational procedures on a computer, it was possible to identify some calculation errors present in the different versions of the original text.
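
    The paper's examples are in C++; a rough Python rendering of the digit-by-digit "cross" multiplication idea (each output digit gathers the cross products of digit pairs whose positions sum to that index, plus the carry) might look as follows. This is a sketch of the idea only, not a transcription of Fibonacci's procedure or of the paper's code.

    ```python
    def cross_multiply(a, b):
        """Digit-by-digit 'cross' multiplication in the spirit of Liber Abaci:
        the k-th digit of the product is built from all pairs of digits whose
        positions sum to k, plus the carry from the previous position."""
        xs = [int(c) for c in str(a)][::-1]   # least-significant digit first
        ys = [int(c) for c in str(b)][::-1]
        result, carry = [], 0
        for k in range(len(xs) + len(ys) - 1):
            s = carry + sum(xs[i] * ys[k - i]
                            for i in range(len(xs)) if 0 <= k - i < len(ys))
            result.append(s % 10)
            carry = s // 10
        while carry:                           # flush any remaining carry digits
            result.append(carry % 10)
            carry //= 10
        return int("".join(map(str, result[::-1])))

    assert cross_multiply(1234, 5678) == 1234 * 5678
    ```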

  20. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  1. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized, molten salt based thermocline TES system under various concentrations and sizes of the suspended particles. Our objectives were to utilize sensible-heat storage that operates with the least irreversibility by using nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  2. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new partial discipline of applied computer science, which makes use of methods and techniques of information processing in environmental protection. Thanks to the inter-disciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the art environmental computer science. The following important subjects are dealt with: Environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data and knowledge-based systems in the environmental sector. (orig.) [de

  3. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.

  4. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

  5. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of the information explosion, the extremely large, discrete, and unstructured or semi-structured nature of big data has gone far beyond what traditional data management approaches can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze and mine massive data, which can effectively solve the problem that traditional data mining methods cannot cope with massive data sets. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
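
    The paper's algorithm is not included in this record; as a minimal illustration of the map/reduce split it describes, the sketch below emulates the map and reduce phases of the pair-support counting step of association rule mining on a single machine. The function names and toy data are illustrative.

    ```python
    from collections import Counter
    from itertools import combinations

    def map_phase(transactions):
        """Map step: emit (item-pair, 1) for every pair in every transaction."""
        for items in transactions:
            for pair in combinations(sorted(set(items)), 2):
                yield pair, 1

    def reduce_phase(pairs):
        """Reduce step: sum the counts emitted for each key."""
        counts = Counter()
        for key, value in pairs:
            counts[key] += value
        return counts

    # Toy usage: support counting for item pairs, the core step that a MapReduce
    # formulation would parallelise across nodes.
    transactions = [["bread", "milk"], ["bread", "beer", "milk"], ["milk", "beer"]]
    support = reduce_phase(map_phase(transactions))
    min_support = 2
    print({k: v for k, v in support.items() if v >= min_support})
    ```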

  6. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or the subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by the Krylov subspace method to reduce computing time, and the numerical test results are given
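
    The MUST code and its transport operators are not available here; the Python sketch below only illustrates the general idea of replacing a stationary source iteration x = Mx + b by a Krylov (GMRES) solve of (I - M)x = b, using a random contraction matrix in place of the scattering operator.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Toy fixed-point problem x = M x + b (the shape of a transport source
    # iteration); the Krylov approach solves (I - M) x = b instead of iterating.
    rng = np.random.default_rng(0)
    n = 200
    M = 0.4 * rng.random((n, n)) / n      # contraction standing in for scattering
    b = rng.random(n)

    A = LinearOperator((n, n), matvec=lambda x: x - M @ x)   # apply (I - M)
    x_krylov, info = gmres(A, b)

    # Reference: plain source iteration run to (near) convergence.
    x_si = np.zeros(n)
    for _ in range(500):
        x_si = M @ x_si + b

    print(info, np.max(np.abs(x_krylov - x_si)))   # 0 means GMRES converged
    ```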

  7. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given

  8. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
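
    As a rough sketch of the end-point flavour of such a Monte Carlo estimator (not necessarily the exact variant analyzed in the paper), the Python fragment below launches short random walks from every page and uses the distribution of walk endpoints as the PageRank estimate; parameter names and the toy graph are illustrative.

    ```python
    import random
    from collections import Counter

    def mc_pagerank(links, n_runs=200, c=0.85, seed=0):
        """Monte Carlo PageRank: start n_runs short random walks from every page;
        each walk follows a random outgoing link and stops with probability 1 - c
        at each step (or at a dangling page). The frequency with which walks end
        at a page estimates its PageRank."""
        rng = random.Random(seed)
        endpoints = Counter()
        pages = list(links)
        for start in pages:
            for _ in range(n_runs):
                page = start
                while rng.random() < c and links[page]:
                    page = rng.choice(links[page])
                endpoints[page] += 1
        total = sum(endpoints.values())
        return {p: endpoints[p] / total for p in pages}

    # Toy web graph given as an adjacency list of outgoing hyperlinks.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(mc_pagerank(links))
    ```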

  9. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after expounding some points that are easily overlooked, and the computation method is given using the Bessel function of order zero, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while reducing the amount of computation

  10. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  11. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods, they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...

  12. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  13. A neutronic method to determine low hydrogen concentrations in metals

    International Nuclear Information System (INIS)

    Bennun, Leonardo; Santisteban, Javier; Diaz-Valdes, J.; Granada, J.R.; Mayer, R.E.

    2007-01-01

    We propose a method for the non-destructive determination of low hydrogen content in metals. The method is based on measurements of neutron inelastic scattering combined with cadmium filters. Determination is simple, and the method would allow the construction of a mobile device to perform the analysis in situ. We give a brief description of the usual methods to determine low hydrogen contents in solids, paying special attention to those methods supported by neutron techniques. We describe the proposed method, calculations to achieve a better sensitivity, and experimental results

  14. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a materially nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.

  15. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  16. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    Science.gov (United States)

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.

  17. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  18. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  19. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  20. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for computing C numerically is presented. The averaging kernels C can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  1. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is noted that while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  2. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.
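
    The authors' measurement-process matrix is not reproduced in this record; the Python sketch below only illustrates the generic truncated-SVD least-squares step described in the abstract, applied to a toy ill-conditioned system. The truncation threshold and test data are illustrative.

    ```python
    import numpy as np

    def solve_truncated_svd(A, b, rcond=1e-3):
        """Least-squares solution of A x = b via the singular value decomposition,
        discarding singular values below rcond * s_max (a generic regularised
        least-squares step, not the authors' full FXCT system)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rcond * s[0]
        # x = V diag(1/s) U^T b, restricted to the retained singular values
        return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

    # Toy usage with a mildly ill-conditioned system.
    rng = np.random.default_rng(0)
    A = rng.random((50, 20)) @ np.diag(np.logspace(0, -6, 20))
    x_true = rng.random(20)
    b = A @ x_true + 1e-6 * rng.standard_normal(50)
    x = solve_truncated_svd(A, b)
    # Residual error is nonzero because the smallest singular values are truncated.
    print(np.linalg.norm(x - x_true))
    ```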

  3. Studies on upgrading quality of zircon concentrate by flotation method

    International Nuclear Information System (INIS)

    Nguyen Duy Phap; Nguyen Duc Hung; Nguyen Duc Thai and others

    2004-01-01

    Results of this study show that it is possible to apply flotation to upgrading the quality of zircon concentrate based on a simple technological flow-sheet and simple flotation agents. It is possible to use flotation equipment of Mexanov. Product criteria include ZrO2 > 64%, with TiO2 and Fe2O3 below 0.1%. The flotation agents are simple, of low toxicity and easy to prepare. (PQM)

  4. Method and equipment to determine fine dust concentrations

    International Nuclear Information System (INIS)

    Breuer, H.; Gebhardt, J.; Robock, K.

    1979-01-01

    The measured values for the fine dust concentration are obtained by optical means and, where possible, agree with the probable deposited quantity of fine dust in the pulmonary alveoli. This is done by receiving the radiation scattered, in dependence on the particle size, at an angle of 70° to the direction of incidence of the primary beam. The wavelength of the primary radiation is in the range of 1000 to 2000 nm. (RW) [de

  5. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...

  6. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  7. Concentration Sensing by the Moving Nucleus in Cell Fate Determination: A Computational Analysis.

    Directory of Open Access Journals (Sweden)

    Varun Aggarwal

    Full Text Available During development of the vertebrate neuroepithelium, the nucleus in neural progenitor cells (NPCs) moves from the apex toward the base and returns to the apex (called interkinetic nuclear migration), at which point the cell divides. The fate of the resulting daughter cells is thought to depend on the sampling by the moving nucleus of a spatial concentration profile of the cytoplasmic Notch intracellular domain (NICD). However, the nucleus executes complex stochastic motions including random waiting and back and forth motions, which can expose the nucleus to randomly varying levels of cytoplasmic NICD. How nuclear position can determine daughter cell fate despite the stochastic nature of nuclear migration is not clear. Here we derived a mathematical model for reaction, diffusion, and nuclear accumulation of NICD in NPCs during interkinetic nuclear migration (INM). Using experimentally measured trajectory-dependent probabilities of nuclear turning, nuclear waiting times and average nuclear speeds in NPCs in the developing zebrafish retina, we performed stochastic simulations to compute the nuclear trajectory-dependent probabilities of NPC differentiation. Comparison with experimentally measured nuclear NICD concentrations and trajectory-dependent probabilities of differentiation allowed estimation of the NICD cytoplasmic gradient. Spatially polarized production of NICD, rapid NICD cytoplasmic consumption and the time-averaging effect of nuclear import/export kinetics are sufficient to explain the experimentally observed differentiation probabilities. Our computational studies lend quantitative support to the feasibility of the nuclear concentration-sensing mechanism for NPC fate determination in zebrafish retina.

  8. Method of processing concentrated liquid waste in nuclear power plant

    International Nuclear Information System (INIS)

    Hasegawa, Kazuyuki; Kitsukawa, Ryozo; Ohashi, Satoru.

    1988-01-01

    Purpose: To reduce the oxidizable material in the concentrated liquid wastes discharged from nuclear power plants. Constitution: Nitrate bacteria are added to the liquid wastes in a storage tank for temporarily storing concentrated liquid wastes, or in the relevant facilities thereof. That is, nitrites, the oxidizable material contained in the concentrated liquid wastes, are converted into nitrate, which is not deleterious to solidification, by utilizing the biological reaction of the nitrate bacteria. To make the conversion more effective, the time required for the biological reaction of the nitrate bacteria is allowed between the injection of the bacteria and solidification, thereby providing advantageous conditions for the propagation of the nitrate bacteria. In this way, there is no increase in the volume of the powdery wastes such as would be caused by adding an inhibitor against the effect of the oxidizable material. Further, heating upon solidification, which has so far been indispensable, is no longer necessary, simplifying the facility and the operation. Furthermore, the solidification-inhibiting material can be reduced stably and reliably under the same operation conditions even if the composition of the liquid wastes changes or varies. (Kamimura, M.)

  9. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
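
    As a hedged illustration of a direct-lighting estimator of this general kind (not the dissertation's own sample-generation method), the Python sketch below area-samples a rectangular light and averages the unoccluded geometric term; all names and values are invented for the example.

    ```python
    import numpy as np

    def direct_light_mc(point, normal, corner, edge_u, edge_v, emitted=1.0,
                        n_samples=256, seed=0):
        """Monte Carlo estimate of the direct illumination at `point` from a
        rectangular area light (a corner plus two edge vectors). Occlusion and
        the surface reflectance constant are omitted; the light is treated as
        two-sided for simplicity."""
        rng = np.random.default_rng(seed)
        light_n = np.cross(edge_u, edge_v)
        area = np.linalg.norm(light_n)                  # area of the rectangle
        light_n /= area
        u = rng.random((n_samples, 1))
        v = rng.random((n_samples, 1))
        samples = corner + u * edge_u + v * edge_v      # uniform points on the light
        d = samples - point
        r2 = np.sum(d * d, axis=1)
        d /= np.sqrt(r2)[:, None]
        cos_surf = np.clip(d @ normal, 0.0, None)       # cosine at the receiver
        cos_light = np.abs(d @ light_n)                 # cosine at the light
        # Uniform area sampling has pdf 1/area, so averaging f/pdf multiplies by area.
        return emitted * area * np.mean(cos_surf * cos_light / r2)

    # Toy usage: a unit-square light two units above a horizontal surface point.
    print(direct_light_mc(point=np.array([0.0, 0.0, 0.0]),
                          normal=np.array([0.0, 0.0, 1.0]),
                          corner=np.array([-0.5, -0.5, 2.0]),
                          edge_u=np.array([1.0, 0.0, 0.0]),
                          edge_v=np.array([0.0, 1.0, 0.0])))
    ```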

  10. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  11. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  12. Computational method for an axisymmetric laser beam scattered by a body of revolution

    International Nuclear Information System (INIS)

    Combis, P.; Robiche, J.

    2005-01-01

    An original hybrid computational method to solve the 2-D problem of the scattering of an axisymmetric laser beam by an arbitrary-shaped inhomogeneous body of revolution is presented. This method relies on a domain decomposition of the scattering zone into concentric spherical radially homogeneous sub-domains and on an expansion of the angular dependence of the fields on the Legendre polynomials. Numerical results for the fields obtained for various scatterers geometries are presented and analyzed. (authors)

  13. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is the construction of a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, all the Manufacturing Grid physical resource nodes are built on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. A prototype Manufacturing Grid application system running on a single personal computer is thus obtained, on which experiments can be carried out. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and readily yields trustworthy experimental results. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  14. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  15. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.

  16. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  17. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  18. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number up to 10,000 is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A reduction method of trial numbers is discussed. • The result with reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using a pseudorandom number and the Monte-Carlo method. The dSFMT (Double precision SIMD-oriented Fast Mersenne Twister), which is an improved version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions should be calculated precisely. A large number of
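
    As a small illustration of the Box–Muller step mentioned above, the Python sketch below converts uniform variates into standard normal perturbations of a nominal parameter. Here Python's built-in Mersenne Twister stands in for dSFMT, and the nominal value and uncertainty are illustrative, not the study's inputs.

    ```python
    import math
    import random

    def box_muller(rng):
        """Box-Muller transform: turn two independent uniform(0, 1) samples into
        two independent standard normal samples."""
        u1, u2 = rng.random(), rng.random()
        u1 = max(u1, 1e-12)                      # guard against log(0)
        r = math.sqrt(-2.0 * math.log(u1))
        return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

    # Perturb a nominal parameter with a 1-sigma relative uncertainty of 2 %.
    rng = random.Random(42)
    nominal, rel_sigma = 350.0, 0.02
    z, _ = box_muller(rng)
    print(nominal * (1.0 + rel_sigma * z))
    ```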

  19. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed

  20. Estimation of suspended sediment concentration in rivers using acoustic methods.

    Science.gov (United States)

    Elçi, Sebnem; Aydin, Ramazan; Work, Paul A

    2009-12-01

    Acoustic Doppler current meters (ADV, ADCP, and ADP) are widely used in water systems to measure flow velocities and velocity profiles. Although these meters are designed for flow velocity measurements, they can also provide information defining the quantity of particulate matter in the water, after appropriate calibration. When an acoustic instrument is calibrated for a water system, no additional sensor is needed to measure suspended sediment concentration (SSC). This provides the simultaneous measurements of velocity and concentration required for most sediment transport studies. The performance of acoustic Doppler current meters for measuring SSC was investigated in different studies where signal-to-noise ratio (SNR) and suspended sediment concentration were related using different formulations. However, these studies were each limited to a single study site where neither the effect of particle size nor the effect of temperature was investigated. In this study, different parameters that affect the performance of an ADV for the prediction of SSC are investigated. In order to investigate the reliability of an ADV for SSC measurements in different environments, flow and SSC measurements were made in different streams located in the Aegean region of Turkey having different soil types. Soil samples were collected from all measuring stations and particle size analysis was conducted by mechanical means. Multivariate analysis was utilized to investigate the effect of soil type and water temperature on the measurements. Statistical analysis indicates that SNR readings obtained from the ADV are affected by water temperature and particle size distribution of the soil, as expected, and a prediction model is presented relating SNR readings to SSC measurements where both water temperature and sediment characteristics are incorporated into the model. The coefficients of the suggested model were obtained using multivariate analysis. Effect of high turbidity
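
    The paper's fitted model and coefficients are not given in this record; the Python sketch below fits one plausible multivariate form, log10(SSC) as a linear function of SNR and water temperature, to invented calibration data purely to illustrate the regression step.

    ```python
    import numpy as np

    # Hypothetical calibration data: acoustic signal-to-noise ratio (dB), water
    # temperature (deg C) and the gravimetric suspended sediment concentration
    # (mg/L) measured at the same times. Values are illustrative only.
    snr = np.array([35.0, 42.0, 48.0, 55.0, 60.0, 66.0])
    temp = np.array([12.0, 14.0, 18.0, 21.0, 24.0, 26.0])
    ssc = np.array([20.0, 55.0, 140.0, 380.0, 900.0, 2100.0])

    # Fit log10(SSC) = b0 + b1*SNR + b2*T, one plausible temperature-corrected
    # acoustic surrogate rating (not the paper's exact model).
    X = np.column_stack([np.ones_like(snr), snr, temp])
    coeffs, *_ = np.linalg.lstsq(X, np.log10(ssc), rcond=None)

    def predict_ssc(snr_db, temp_c):
        return 10 ** (coeffs[0] + coeffs[1] * snr_db + coeffs[2] * temp_c)

    print(predict_ssc(50.0, 19.0))
    ```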

  1. Application of tripolar concentric electrodes and prefeature selection algorithm for brain-computer interface.

    Science.gov (United States)

    Besio, Walter G; Cao, Hongbao; Zhou, Peng

    2008-04-01

    For persons with severe disabilities, a brain-computer interface (BCI) may be a viable means of communication. Laplacian electroencephalogram (EEG) has been shown to improve classification in EEG recognition. In this work, the effectiveness of signals from tripolar concentric electrodes and disc electrodes was compared for use as a BCI. Two sets of left/right hand motor imagery EEG signals were acquired. An autoregressive (AR) model was developed for feature extraction with a Mahalanobis distance based linear classifier for classification. An exhaustive selection algorithm was employed to analyze three factors before feature extraction. The factors analyzed were 1) length of data in each trial to be used, 2) start position of data, and 3) the order of the AR model. The results showed that tripolar concentric electrodes generated significantly higher classification accuracy than disc electrodes.
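
    The processing chain described—autoregressive (AR) coefficients as features, followed by a Mahalanobis-distance classifier—can be sketched roughly as below. The AR order, trial length and synthetic trials are placeholders; the original study tuned these choices with its exhaustive pre-feature selection.

        import numpy as np

        def ar_features(x, order=6):
            """Least-squares AR(order) coefficients of one EEG trial (1-D array)."""
            X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
            y = x[order:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        def mahalanobis_sq(f, mean, cov_inv):
            d = f - mean
            return float(d @ cov_inv @ d)

        def train(features_a, features_b):
            """Class means and pooled inverse covariance from training features."""
            mu_a, mu_b = features_a.mean(axis=0), features_b.mean(axis=0)
            pooled = np.cov(np.vstack([features_a - mu_a, features_b - mu_b]).T)
            return mu_a, mu_b, np.linalg.pinv(pooled)

        def classify(trial, mu_a, mu_b, cov_inv, order=6):
            f = ar_features(trial, order)
            return "left" if mahalanobis_sq(f, mu_a, cov_inv) < mahalanobis_sq(f, mu_b, cov_inv) else "right"

        # Usage with synthetic trials standing in for real EEG epochs.
        rng = np.random.default_rng(0)
        fa = np.array([ar_features(rng.standard_normal(512)) for _ in range(20)])
        fb = np.array([ar_features(rng.standard_normal(512)) for _ in range(20)])
        mu_a, mu_b, cov_inv = train(fa, fb)
        print(classify(rng.standard_normal(512), mu_a, mu_b, cov_inv))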

  2. Computational methods in several fields of radiation dosimetry

    International Nuclear Information System (INIS)

    Paretzke, Herwig G.

    2010-01-01

    Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks starting from radioecology, retrospective and predictive dosimetry, personal dosimetry, up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture some examples of our own computational models will be given from the various dosimetric fields, including a) Radioecology (e.g. with the code systems based on ECOSYS, which was developed far before the Chernobyl reactor accident, and tested thoroughly afterwards), b) Internal dosimetry (improved metabolism models based on our own data), c) External dosimetry (with the new ICRU-ICRP-Voxelphantom developed by our lab), d) Radiation therapy (with GEANT IV as applied to mixed reactor radiation incident on individualized voxel phantoms), e) Some aspects of nanodosimetric track structure computations (not dealt with in the other presentation of this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on good scientific practices which should generally be followed when developing and applying such computational models

  3. Combustible gas concentration control facility and operation method therefor

    International Nuclear Information System (INIS)

    Yoshikawa, Kazuhiro; Ando, Koji; Kinoshita, Shoichiro; Yamanari, Shozo; Moriya, Kimiaki; Karasawa, Hidetoshi

    1998-01-01

    The present invention provides a hydrogen gas-control facility by using a fuel battery-type combustible gas concentration reducing device as a countermeasure for controlling hydrogen gas in a reactor container. Namely, a hydrogen electrode adsorbs hydrogen by using an ion exchange membrane comprising hydrogen ions as a charge carrier. An air electrode adsorbs oxygen in the air. A fuel battery converts the recombining energy of hydrogen and oxygen to electric energy. Hydrogen in this case is supplied from the atmosphere in the container. Oxygen in this case is supplied from the air outside of the container. If hydrogen gas should be generated in the reactor, power generation is performed by the fuel battery using hydrogen gas as a fuel on the side of the hydrogen electrode and oxygen in the air outside of the container as a fuel on the side of the air electrode. Then, the hydrogen gas is consumed, thereby controlling the hydrogen gas concentration in the container. Electric current generated in the fuel battery is used as an emergency power source for the countermeasure for a severe accident. (I.S.)

  4. Combustible gas concentration control facility and operation method therefor

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Kazuhiro; Ando, Koji; Kinoshita, Shoichiro; Yamanari, Shozo; Moriya, Kimiaki; Karasawa, Hidetoshi

    1998-09-25

    The present invention provides a hydrogen gas-control facility by using a fuel battery-type combustible gas concentration reducing device as a countermeasure for controlling hydrogen gas in a reactor container. Namely, a hydrogen electrode adsorbs hydrogen by using an ion exchange membrane comprising hydrogen ions as a charge carrier. An air electrode adsorbs oxygen in the air. A fuel battery converts the recombining energy of hydrogen and oxygen to electric energy. Hydrogen in this case is supplied from the atmosphere in the container. Oxygen in this case is supplied from the air outside of the container. If hydrogen gas should be generated in the reactor, power generation is performed by the fuel battery using hydrogen gas as a fuel on the side of the hydrogen electrode and oxygen in the air outside of the container as a fuel on the side of the air electrode. Then, the hydrogen gas is consumed, thereby controlling the hydrogen gas concentration in the container. Electric current generated in the fuel battery is used as an emergency power source for the countermeasure for a severe accident. (I.S.)

  5. Method of concentrating and separating lithium isotopes by laser

    International Nuclear Information System (INIS)

    Yamashita, Mikio; Kashiwagi, Hiroshi.

    1976-01-01

    Purpose: To eliminate the need for repeating the separation operation in many stages and permit concentrated lithium of high purity to be obtained in a short period of time. Constitution: Lithium atom vapor is irradiated by a laser of wavelengths resonant to the ⁶Li or ⁷Li absorption spectra present in the neighborhood of 6,707.84 A or 3,232.61 A (a chromatic laser being used for oscillation in the neighborhood of 6,707.84 A and an ultraviolet laser for oscillation in the neighborhood of 3,232.61 A) for selectively exciting ⁶Li or ⁷Li alone. Then, ionization is brought about by using other types of lasers (an ultraviolet Ar ion laser being used for the former wavelength and a visible Ar ion laser for the latter wavelength), and only the ionized isotopes are passed through a mass filter and collected by an ion collector, thereby effecting separation of the ionized isotopes from the non-ionized neutral isotopes and their concentration. (Aizawa, K.)

  6. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  7. Computations of finite temperature QCD with the pseudofermion method

    International Nuclear Information System (INIS)

    Fucito, F.; Solomon, S.

    1985-01-01

    The authors discuss the phase diagram of finite temperature QCD as it is obtained when including the effects of dynamical quarks by the pseudofermion method. They compare their results with the results obtained by other groups and comment on the actual state of the art for this kind of computation

  8. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes equations

  9. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the first issue, showing their overview and an introduction of continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  10. STUDIES OF METHOD FOR DETERMINING THE PROTEIN CONCENTRATION OF "MALEIN PPD" BY THE KJELDAHL METHOD

    Directory of Open Access Journals (Sweden)

    Ciuca, V

    2017-06-01

    Full Text Available Glanders is a contagious and fatal disease of horses, donkeys, and mules, caused by infection with the bacterium Burkholderia mallei. The pathogen causes nodules and ulcerations in the upper respiratory tract and lungs. Glanders is transmissible to humans by direct contact with diseased animals or with infected or contaminated material. In the untreated acute disease, the mortality rate can reach 95% within 3 weeks. Malein PPD, the diagnostic product, contains max 2 mg/ml Burkholderia mallei. The amount of protein in the biological product "Malein PPD" is measured as nitrogen from the protein molecule, applying the Kjeldahl method (determination of nitrogen by sulphuric acid digestion). The validation study aims to demonstrate that the determination of the protein of Malein PPD by sulphuric acid digestion is an appropriate, reproducible analytical method that meets the quality requirements of diagnostic reagents. The paper establishes the performance characteristics of the method considered and identifies the factors that influence these characteristics. The method for determining the concentration of protein by the Kjeldahl method is considered valid if the results obtained for each validation parameter are within the admissibility criteria. The validation procedure includes details on the working protocol to determine the protein of Malein PPD, validation criteria, experimental results, and mathematical calculations.
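
    The Kjeldahl calculation itself is simple arithmetic: the nitrogen found by titration after digestion is multiplied by a nitrogen-to-protein conversion factor (6.25 is the generic factor assumed below; a product-specific monograph may prescribe another). The numbers in the sketch are illustrative only, not the validated laboratory procedure.

        def kjeldahl_protein(v_acid_ml, blank_ml, acid_normality, sample_volume_ml,
                             conversion_factor=6.25):
            """Protein concentration (mg/mL) from a Kjeldahl titration.

            Nitrogen (mg) = (V_sample - V_blank) * N_acid * 14.007
            Protein (mg)  = Nitrogen * conversion factor (generic 6.25 assumed).
            """
            nitrogen_mg = (v_acid_ml - blank_ml) * acid_normality * 14.007
            return nitrogen_mg * conversion_factor / sample_volume_ml

        # Illustrative numbers only (not measured data): about 2 mg/mL protein.
        print(kjeldahl_protein(v_acid_ml=0.33, blank_ml=0.10, acid_normality=0.1,
                               sample_volume_ml=1.0))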

  11. A modified method to determine biomass concentration as COD in ...

    African Journals Online (AJOL)

    drinie

    2002-10-04

    Oct 4, 2002 ... A modification of the standard VSS technique was also proposed using two membranes in the filtration device; this technique allowed the biomass determination in 1 μm size bacteria cultures that cannot be detected by the standard VSS method because cells are not retained by the 1.5 μm diameter pore ...

  12. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  13. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different ways of approach and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...

  14. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  15. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
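
    A forward projection of the kind being compared can be sketched by rotating the image and summing along one axis, a common numerical shortcut for parallel-beam geometry. This generic projector is not any of the four specific schemes evaluated in the paper; scipy's rotate with order 1 performs a bilinear-style interpolation here, and the phantom is a placeholder.

        import numpy as np
        from scipy.ndimage import rotate

        def forward_project(image, angles_deg):
            """Parallel-beam sinogram: one projection (column sums of the
            rotated image) per angle; interpolation order 1 ~ bilinear."""
            return np.stack([
                rotate(image, ang, reshape=False, order=1).sum(axis=0)
                for ang in angles_deg
            ])

        # Toy phantom: a bright square in an otherwise empty image.
        phantom = np.zeros((64, 64))
        phantom[24:40, 24:40] = 1.0
        sinogram = forward_project(phantom, np.arange(0, 180, 3))
        print(sinogram.shape)   # (60, 64): 60 angles x 64 detector bins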

  16. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  17. A magnetic method to concentrate and trap biological targets

    KAUST Repository

    Li, Fuquan

    2012-11-01

    Magnetoresistive sensors in combination with magnetic particles have been used in biological applications due to, e.g., their small size and high sensitivity. A growing interest is to integrate magnetoresistive sensors with microchannels and electronics to fabricate devices that can perform complex analyses. A major task in such systems is to immobilize magnetic particles on top of the sensor surface, which is required to detect the particles' stray field. In the presented work, a bead concentrator, consisting of gold microstructures, at the bottom of a microchannel, is used to attract and move magnetic particles into a trap. The trap is made of a chamber with a gold microstructure underneath and is used to attract and immobilize a defined number of magnetic beads. In order to detect targets, two kinds of solutions were prepared; one containing only superparamagnetic particles, the other one containing beads with the protein Bovine serum albumin as the target and fluorescent markers. Due to the size difference between bare beads and beads with target, less magnetic beads were immobilized inside the volume chamber in case of magnetic beads with target as compared to bare magnetic beads. © 1965-2012 IEEE.

  18. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  19. Uncertainty assessment of the breath methane concentration method to determine methane production of dairy cows

    NARCIS (Netherlands)

    Wu, Liansun; Groot Koerkamp, Peter W.G.; Ogink, Nico

    2018-01-01

    The breath methane concentration method uses the methane concentrations in the cow's breath during feed bin visits as a proxy for the methane production rate. The objective of this study was to assess the uncertainty of a breath methane concentration method in a feeder and its capability to measure

  20. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure for several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling structure and kinetics of riboswitch aptamers and for designing riboswitches.

  1. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  2. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
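
    The preliminary stage described—peak picking in a pitch-energy spectrum—can be illustrated with a generic peak picker; the RTFI itself and the harmonic-grouping step are not reproduced here, and the thresholds and synthetic spectrum below are arbitrary assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        def pick_pitch_candidates(pitch_energy, freqs_hz, rel_threshold=0.2, min_sep=3):
            """Return candidate pitches: local maxima of the pitch-energy
            spectrum above a fraction of the global maximum."""
            height = rel_threshold * pitch_energy.max()
            peaks, _ = find_peaks(pitch_energy, height=height, distance=min_sep)
            return freqs_hz[peaks]

        # Synthetic pitch-energy spectrum with two notes (placeholder data).
        freqs = np.linspace(27.5, 4186.0, 1000)            # roughly the piano range
        energy = (np.exp(-0.5 * ((freqs - 440.0) / 5.0) ** 2) +
                  0.7 * np.exp(-0.5 * ((freqs - 261.6) / 5.0) ** 2))
        print(pick_pitch_candidates(energy, freqs))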

  3. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
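
    One of the statistical applications listed—selection of variables in regression—can be sketched with a bare-bones genetic algorithm in which each chromosome is a 0/1 mask over candidate predictors and the fitness is a BIC-like penalised fit. Everything below, including the synthetic data and the GA settings, is illustrative rather than any specific algorithm from the review.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic data: only predictors 0 and 3 truly matter.
        n, p = 200, 8
        X = rng.standard_normal((n, p))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)

        def fitness(mask):
            """Negative BIC-like score of the OLS fit on the selected columns."""
            if mask.sum() == 0:
                return -np.inf
            Xs = X[:, mask.astype(bool)]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            return -(n * np.log(rss / n) + mask.sum() * np.log(n))

        pop = rng.integers(0, 2, size=(30, p))          # random initial population
        for generation in range(40):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(10)], parents[rng.integers(10)]
                cut = rng.integers(1, p)                # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(p) < 0.05             # mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.array(children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected predictors:", np.flatnonzero(best))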

  4. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed

  5. Yeast viability and concentration analysis using lens-free computational microscopy and machine learning

    Science.gov (United States)

    Feizi, Alborz; Zhang, Yibo; Greenbaum, Alon; Guziak, Alex; Luong, Michelle; Chan, Raymond Yan Lok; Berg, Brandon; Ozkan, Haydar; Luo, Wei; Wu, Michael; Wu, Yichen; Ozcan, Aydogan

    2017-03-01

    Research laboratories and the industry rely on yeast viability and concentration measurements to adjust fermentation parameters such as pH, temperature, and pressure. Beer-brewing processes as well as biofuel production can especially utilize a cost-effective and portable way of obtaining data on cell viability and concentration. However, current methods of analysis are relatively costly and tedious. Here, we demonstrate a rapid, portable, and cost-effective platform for imaging and measuring viability and concentration of yeast cells. Our platform features a lens-free microscope that weighs 70 g and has dimensions of 12 × 4 × 4 cm. A partially-coherent illumination source (a light-emitting-diode), a band-pass optical filter, and a multimode optical fiber are used to illuminate the sample. The yeast sample is directly placed on a complementary metal-oxide semiconductor (CMOS) image sensor chip, which captures an in-line hologram of the sample over a large field-of-view of >20 mm². The hologram is transferred to a touch-screen interface, where a trained Support Vector Machine model classifies yeast cells stained with methylene blue as live or dead and measures cell viability as well as concentration. We tested the accuracy of our platform against manual counting of live and dead cells using fluorescent exclusion staining and a bench-top fluorescence microscope. Our regression analysis showed no significant difference between the two methods within a concentration range of 1.4 × 10⁵ to 1.4 × 10⁶ cells/mL. This compact and cost-effective yeast analysis platform will enable automatic quantification of yeast viability and concentration in field settings and resource-limited environments.
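
    The classification step described—a Support Vector Machine deciding live versus dead from features of each stained cell—can be mimicked with scikit-learn. The two features used below (mean stain absorbance and cell area) and the synthetic training data are assumptions for illustration; the actual platform extracts its own feature set from the reconstructed holograms.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Synthetic training features per cell: [mean stain absorbance, area in px].
        # Methylene-blue-stained (dead) cells are assumed darker on average.
        live = np.column_stack([rng.normal(0.2, 0.05, 200), rng.normal(110, 15, 200)])
        dead = np.column_stack([rng.normal(0.6, 0.05, 200), rng.normal(100, 15, 200)])
        X = np.vstack([live, dead])
        y = np.array([0] * 200 + [1] * 200)          # 0 = live, 1 = dead

        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

        def viability(cell_features):
            """Fraction of cells classified as live, i.e. the viability estimate."""
            labels = clf.predict(np.asarray(cell_features))
            return 1.0 - labels.mean()

        print(viability([[0.25, 105.0], [0.58, 95.0], [0.18, 120.0]]))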

  6. Computer-aided methods of determining thyristor thermal transients

    International Nuclear Information System (INIS)

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs
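
    The convolution-integral approach mentioned can be written directly: the junction temperature rise is the convolution of the dissipated power waveform with the time derivative of the transient thermal impedance. The Foster-style impedance parameters and the pulsed load below are placeholders, not data for the PPPL converter system.

        import numpy as np

        dt = 1e-3                                    # time step (s)
        t = np.arange(0.0, 2.0, dt)

        # Placeholder Foster network: Zth(t) = sum R_i * (1 - exp(-t / tau_i)).
        R   = np.array([0.010, 0.025])               # K/W
        tau = np.array([0.05, 0.60])                 # s
        zth = (R[:, None] * (1.0 - np.exp(-t[None, :] / tau[:, None]))).sum(axis=0)

        # Pulsed power loss in the thyristor: 1 kW for the first 0.5 s of each second.
        power = np.where((t % 1.0) < 0.5, 1000.0, 0.0)

        # Temperature rise = convolution of power with dZth/dt.
        dzth = np.gradient(zth, dt)
        delta_T = np.convolve(power, dzth)[: len(t)] * dt
        print(delta_T.max())                         # peak junction temperature rise (K)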

  7. Effect of Saline Pushing after Contrast Material Injection in Abdominal Multidetector Computed Tomography with the Use of Different Iodine Concentrations

    International Nuclear Information System (INIS)

    Tatsugami, F.; Matsuki, M.; Kani, H.; Tanikake, M.; Miyao, M.; Yoshikawa, S.; Narabayashi, I.

    2006-01-01

    Purpose: To investigate whether saline pushing after contrast material improves hepatic vascular and parenchymal enhancement, and to determine whether this technique permits decreased contrast material concentration. Material and Methods: 120 patients who underwent hepatic multidetector computed tomography were divided randomly into four groups (Groups A-D): receiving 100 ml of contrast material (300 mgI/ml) only (A) or with 50 ml of saline solution (B); or 100 ml of contrast material (350 mgI/ml) only (C) or with 50 ml of saline solution (D). Computed tomography (CT) values of the aorta in the arterial phase, the portal vein in the portal venous inflow phase, and the liver in the hepatic phase were measured. Visualization of the hepatic artery and the portal vein by 3D CT angiography was evaluated as well. Results: Although the enhancement values of the aorta were not improved significantly with saline pushing, they continued at a high level to the latter slices with saline pushing. The enhancement value of the portal vein increased significantly and CT portography was improved with saline pushing. The enhancement value of the liver was not improved significantly using saline pushing. In a comparison between groups B and C, the enhancement values of the aorta and portal vein and the visualization of CT arteriography and portography were not statistically different. Conclusion: The saline pushing technique can contribute to a decrease in contrast material concentration for 3D CT arteriography and portography

  8. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  9. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. Computer-generated cylindrical holograms are scarce because the spatial resolution of output devices is not great enough; therefore, we have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  10. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on nondistinct theories by physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting learning, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer instruction experiments with the robot device “LEGO MINDSTORMS NXT”, equipped with ultrasonic distance, touch and light sensors.

  12. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques which can detect the occurrence of faults fast and precisely. But these methods suffer from the limitation of traffic overhead, especially in large scale networks. To relieve the traffic overhead induced by active probing based methods, a new fault detection method, whose key idea is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected by using a small set of probes. Meanwhile, it also ensures that the entire network can be covered after multiple detection stages. This method can guarantee that the traffic used by probes during each detection stage is sufficiently small so that the network can operate without severe disturbance from probes. Several simulation results verify the effectiveness of the proposed method

  13. Practical methods to improve the development of computational software

    International Nuclear Information System (INIS)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960's, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  14. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with the image sequences of real world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. The registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method is introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, although this will introduce erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome these shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and thus registration is accomplished even when the markers are lost in the scene.
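
    In practice, once candidate correspondences on the planar region have been detected (with the contract-expand matching mentioned above or any feature matcher), the RANSAC homography estimate reduces to a single OpenCV call. The point arrays below are placeholders, with one deliberately bad correspondence to show outlier rejection; this sketch is an illustration, not the authors' implementation.

        import numpy as np
        import cv2

        # Candidate correspondences between the reference plane and the current frame.
        src = np.float32([[0, 0], [100, 0], [100, 100], [0, 100],
                          [50, 50], [30, 70], [80, 20], [10, 90]])
        dst = np.float32([[10, 8], [110, 8], [110, 108], [10, 108],
                          [60, 58], [200, 300], [90, 28], [20, 98]])   # one outlier

        # RANSAC rejects correspondences whose reprojection error exceeds 3 px.
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        print(H)
        print(inlier_mask.ravel())    # 1 = inlier, 0 = rejected outlier

        # The recovered H can then be decomposed (given the camera intrinsics) into
        # the camera pose used to register the virtual objects.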

  15. Pair Programming as a Modern Method of Teaching Computer Science

    Directory of Open Access Journals (Sweden)

    Irena Nančovska Šerbec

    2008-10-01

    Full Text Available At the Faculty of Education, University of Ljubljana we educate future computer science teachers. Besides didactic, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills of programming that are crucial for computer science teachers. For all courses, the main emphasis is the absorption of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with the teaching knowledge and skills. In the paper we present how to achieve competences related to programming by using different didactical models (semiotic ladder, cognitive objectives taxonomy, problem solving) and the modern teaching method “pair programming”. Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We have experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method: the aspects of satisfaction during programming and the level of gained knowledge. The results are in general positive and demonstrate the promising usage of this teaching method.

  16. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that the damage which takes place in the interior of specimens will extend to the exterior and cause fracture of specimens; the damage is a fast process relative to the whole tension process. The EFG method provides a more stable and robust numerical solution in comparison with the FEM analysis

  17. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    Full Text Available This work considers the numerical computation methods and procedures for predicting the fatigue crack growth of cracked notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed mode cracks. The model is based on an equation expressed in terms of low cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is easy for engineering applications since it does not require any additional determination of fatigue parameters (those would need to be separately determined for the fatigue crack propagation phase), and low cyclic fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods. Both of these models are considered. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1, 6] to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]

  18. CENTAURE, a numerical model for the computation of the flow and isotopic concentration fields in a gas centrifuge

    International Nuclear Information System (INIS)

    Soubbaramayer

    1977-01-01

    A numerical code (CENTAURE) built up with 36000 cards and 343 subroutines to investigate the full interconnected field of velocity, temperature, pressure and isotopic concentration in a gas centrifuge is presented. The complete set of Navier-Stokes equations, continuity equation, energy balance, isotopic diffusion equation and gas state law forms the basis of the model with proper boundary conditions, depending essentially upon the nature of the countercurrent and the thermal condition of the walls. Sources and sinks are located either inside the centrifuge or in the boundaries. The model includes not only the usual terms of CORIOLIS, compressibility, viscosity and thermal diffusivity but also the nonlinear terms of inertia in the momentum equations, thermal convection and viscous dissipation in the energy equation. The computation is based on the finite element method and direct resolution instead of finite differences and an iterative process. The code is quite flexible and well adapted to compute many physical cases in one centrifuge: the computation time per one case is then very small (we work with an IBM-360-91). The numerical results are exploited with the help of a visualisation screen IBM 2250. The possibilities of the code are presented with numerical illustrations. Some results are commented on and compared to linear theories

  19. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  20. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron Correlation in Molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  1. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure could offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
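
    For reference, the plain (non-reordered) PageRank iteration that the reordering technique accelerates can be written in a few lines; the tiny link matrix below is illustrative only. The reordered method of the paper instead permutes the hyperlink matrix so that part of the vector is obtained directly by forward substitution.

        import numpy as np

        def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
            """Basic power-iteration PageRank on a dense adjacency matrix
            (adj[i, j] = 1 if page i links to page j)."""
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            # Column-stochastic transition matrix; dangling pages link everywhere.
            P = np.where(out_deg[:, None] > 0,
                         adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
            x = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                x_new = damping * P @ x + (1.0 - damping) / n
                if np.abs(x_new - x).sum() < tol:
                    break
                x = x_new
            return x_new

        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 0],
                        [0, 0, 1, 0]], dtype=float)
        print(pagerank(adj))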

  2. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  3. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
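
    A reduced version of such a simulation can convey the idea: scatter events of fixed duration across an observation period, score each interval with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR), and compare the estimates with the true proportion of time occupied. All settings below are arbitrary examples, not the parameter grid of the study.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate(obs_s=600, interval_s=10, n_events=20, event_s=4):
            """Return (true %, MTS %, PIR %, WIR %) for one simulated session."""
            resolution = 0.1                                    # s per time bin
            n_bins = int(obs_s / resolution)
            occupied = np.zeros(n_bins, dtype=bool)
            for s in rng.uniform(0, obs_s - event_s, n_events):
                occupied[int(s / resolution): int((s + event_s) / resolution)] = True

            bins_per_int = int(interval_s / resolution)
            intervals = occupied[: n_bins - n_bins % bins_per_int].reshape(-1, bins_per_int)

            true_pct = 100 * occupied.mean()
            mts = 100 * intervals[:, -1].mean()                 # behaviour at interval end
            pir = 100 * intervals.any(axis=1).mean()            # behaviour at any moment
            wir = 100 * intervals.all(axis=1).mean()            # behaviour the whole interval
            return true_pct, mts, pir, wir

        print(simulate())   # PIR typically overestimates, WIR underestimates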

  4. The comparison of assessment of pigeon semen motility and sperm concentration by conventional methods and the CASA system (HTM IVOS).

    Science.gov (United States)

    Klimowicz, M D; Nizanski, W; Batkowski, F; Savic, M A

    2008-07-01

    The aim of these experiments was to compare conventional, microscopic methods of evaluating pigeon sperm motility and concentration to those measured by computer-assisted sperm analysis (CASA system). Semen was collected twice a week from two groups of pigeons, each of 40 males (group I: meat-type breed; group II: fancy pigeon) using the lumbo-sacral and cloacal region massage method. Ejaculates collected in each group were diluted 1:100 in BPSE solution and divided into two equal samples. One sample was examined subjectively by microscope and the second one was analysed using the CASA system. The sperm concentration was measured by CASA using the anti-collision (AC) system and fluorescent staining (IDENT). There were no significant differences between the methods of evaluation of sperm concentration. High positive correlations in both groups were observed between the sperm concentration estimated by Thom counting chamber and AC (r=0.87 and r=0.91, respectively), and between the sperm concentration evaluated by Thom counting chamber and IDENT (r=0.85 and r=0.90, respectively). The mean values for CASA measurement of the proportion of motile spermatozoa (MOT) and progressive movement (PMOT) were significantly lower than the values estimated subjectively in both groups of pigeons. The CASA system is a very rapid, objective and sensitive method for detecting subtle motility characteristics as well as sperm concentration and is recommended for future research into pigeon semen.

  5. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support the tumour grade diagnosis decision was proposed and developed. The novelty of the method is based on employing the soft computing method of FCMs to represent specialized knowledge on histopathology and on augmenting the FCM's ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
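
    The core of an FCM classifier of this kind is a simple iterative update: each concept's activation is pushed through a squashing function of its weighted inputs until the map converges, and the activation of the output concept (here, tumour grade) is read off. The 9-concept weight matrix learned with AHL in the paper is not reproduced; the small random weights and the case vector below are placeholders.

        import numpy as np

        def fcm_infer(weights, activations, n_iter=50, tol=1e-6):
            """Iterate a fuzzy cognitive map: A_i <- sigmoid(A_i + sum_j w_ji * A_j).
            weights[j, i] is the causal influence of concept j on concept i."""
            sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
            a = np.asarray(activations, dtype=float)
            for _ in range(n_iter):
                a_new = sigmoid(a + a @ weights)
                if np.max(np.abs(a_new - a)) < tol:
                    break
                a = a_new
            return a

        rng = np.random.default_rng(3)
        n_concepts = 9                       # 8 histopathological features + grade
        W = rng.uniform(-1, 1, (n_concepts, n_concepts))
        np.fill_diagonal(W, 0.0)             # no self-influence

        case = np.array([0.7, 0.2, 0.9, 0.4, 0.8, 0.1, 0.6, 0.3, 0.0])   # grade unknown
        steady = fcm_infer(W, case)
        print("grade concept activation:", steady[-1])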

  6. Electro-encephalogram based brain-computer interface: improved performance by mental practice and concentration skills.

    Science.gov (United States)

    Mahmoudi, Babak; Erfanian, Abbas

    2006-11-01

    Mental imagination is an essential part of most EEG-based communication systems. Thus, the quality of mental rehearsal, the degree of imagined effort, and mind controllability should have a major effect on the performance of an electro-encephalogram (EEG) based brain-computer interface (BCI). It is now well established that mental practice using motor imagery improves motor skills. The effects of mental practice on motor skill learning are the result of practice on central motor programming. According to this view, it seems logical that mental practice should modify the neuronal activity in the primary sensorimotor areas and consequently change the performance of an EEG-based BCI. For developing a practical BCI system, recognizing the resting state with eyes open and the imagined voluntary movement is important. For this purpose, the mind should be able to focus on a single goal for a period of time, without deviation to another context. In this work, we examine the role of mental practice and concentration skills in EEG control during imagined hand movements. The results show that mental practice and concentration can generally improve the classification accuracy of the EEG patterns. It is found that mental training has a significant effect on the classification accuracy over the primary motor cortex and frontal area.

  7. Splitting method for computing coupled hydrodynamic and structural response

    International Nuclear Information System (INIS)

    Ash, J.E.

    1977-01-01

    A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data

  8. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians

  9. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)

  10. Impact of Contrast Media Concentration on Low-Kilovolt Computed Tomography Angiography: A Systematic Preclinical Approach.

    Science.gov (United States)

    Fleischmann, Ulrike; Pietsch, Hubertus; Korporaal, Johannes G; Flohr, Thomas G; Uder, Michael; Jost, Gregor; Lell, Michael M

    2018-05-01

    Low peak kilovoltage (kVp) protocols in computed tomography angiography (CTA) demand a review of contrast media (CM) administration practices. The aim of this study was to systematically evaluate different iodine concentrations of CM in a porcine model. Dynamic 70 kVp CTA was performed on 7 pigs using a third-generation dual-source CT system. Three CM injection protocols (A-C) with an identical total iodine dose and iodine delivery rate (150 mg I/kg, 12 s, 0.75 g I/s) differed in iodine concentration and flow rate (protocol A: 400 mg I/mL, 1.9 mL/s; B: 300 mg I/mL, 2.5 mL/s; C: 150 mg I/mL, 5 mL/s). All protocols were applied in a randomized order and compared intraindividually. Arterial enhancement at different locations in the pulmonary artery, the aorta, and aortic branches was measured over time. Time attenuation curves, peak enhancement, time to peak, and bolus tracking delay times needed for static CTA were calculated. The reproducibility of optimal parameters was tested in single-phase CTA. The heart rates of the pigs were comparable for all protocols (P > 0.7). The injection pressure was significantly higher for protocol A (64 ± 5 psi) and protocol C (55 ± 3 psi) compared with protocol B (39 ± 2 psi) (P < 0.001). Average arterial peak enhancement in the dynamic scans was 359 ± 51 HU (protocol A), 382 ± 36 HU (B), and 382 ± 60 HU (C) (A compared with B and C: P < 0.01; B compared with C: P = 0.995). Time to peak enhancement decreased with increasing injection rate. The delay time for bolus tracking depended on the injection rate as well and was highest for protocol A (4.7 seconds) and lowest for protocol C (3.9 seconds) (P = 0.038). The peak enhancement values of the dynamic scans highly correlated with those of the single-phase CTA scans. In 70 kVp CTA, a 300 mg I/mL iodine concentration proved superior to high-concentration CM when the iodine delivery rate was kept constant. Moreover, iodine concentrations as low as 150 mg I/mL can be administered

  11. Comparative analysis of methods for concentrating venom from jellyfish Rhopilema esculentum Kishinouye

    Science.gov (United States)

    Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng

    2009-02-01

    In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that methods using either freeze-drying or gel absorption to remove water are not suitable for concentrating the venom, owing to the low concentration of the dissolved compounds. Although the recovery efficiency and the total venom obtained using the dialysis dehydration method are high, some proteins can be lost during the concentrating process. Compared to the lyophilization method, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method for concentrating venom from jellyfish tentacles, showing not only high recovery efficiency but also high hemolytic activity.

  12. Simulation of Rn-222 decay products concentration deposited on a filter. Description of radon1.pas computer program

    International Nuclear Information System (INIS)

    Machaj, B.

    1996-01-01

    A computer program is presented that simulates the activity distribution of 222Rn short-lived decay products deposited on a filter as a function of time, for any degree of radioactive equilibrium of the decay products. Deposition of the decay products is simulated by summing discrete samples every 1/10 min over the sampling time of 1 to 10 min. The concentration (activity) of the decay products is computed at one-minute intervals in the range 1-100 min. The alpha concentration and the total activity of 218Po + 214Po produced are computed in the range 1 to 100 min as well. (author). 10 refs, 4 figs
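
    The record does not reproduce the radon1.pas source, but the buildup-and-decay computation it describes can be sketched as a small time-stepping loop. In the sketch below the half-lives are standard approximate values, while the deposition rates, sampling time and time step are assumed for illustration.

```python
import numpy as np

# Approximate half-lives of the short-lived 222Rn daughters, in minutes.
HALF_LIVES = {"Po-218": 3.05, "Pb-214": 26.8, "Bi-214": 19.9}
LAMBDAS = {k: np.log(2.0) / v for k, v in HALF_LIVES.items()}

def filter_activity(deposition_rates, t_sample=5.0, t_end=100.0, dt=0.01):
    """Time-step the number of daughter atoms collected on a filter.

    deposition_rates -- atoms/min of Po-218, Pb-214 and Bi-214 carried onto the
    filter while the pump runs (assumed constant during the sampling time).
    Returns times and the alpha activity (decays/min) of Po-218 + Po-214, with
    Po-214 taken to be in instantaneous equilibrium with Bi-214."""
    l1, l2, l3 = (LAMBDAS[k] for k in ("Po-218", "Pb-214", "Bi-214"))
    n1 = n2 = n3 = 0.0
    times, alpha = [], []
    for step in range(int(t_end / dt) + 1):
        t = step * dt
        r1, r2, r3 = deposition_rates if t <= t_sample else (0.0, 0.0, 0.0)
        n1 += (r1 - l1 * n1) * dt
        n2 += (r2 + l1 * n1 - l2 * n2) * dt
        n3 += (r3 + l2 * n2 - l3 * n3) * dt
        times.append(t)
        alpha.append(l1 * n1 + l3 * n3)
    return np.array(times), np.array(alpha)

t, a = filter_activity(deposition_rates=(100.0, 80.0, 60.0))
print("alpha activity at 10 min: %.1f decays/min" % a[np.searchsorted(t, 10.0)])
```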

  13. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts

  14. Comparison of different methods for shielding design in computed tomography

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, O.; Arandjic, D.; Kosutic, D.

    2011-01-01

    The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Inst. of Radiology and Inst. of Physics in Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thickness were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating underestimation of up to 20 % and overestimation of up to 30 % when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33 % higher (27-42 %). BIR-IPEM methodology-based results were comparable with values based on scattered dose measurements, while results obtained using NCRP methodology demonstrated an overestimation of the minimal required barrier thickness. (authors)

  15. Method and apparatus for detecting dilute concentrations of radioactive xenon in samples of xenon extracted from the atmosphere

    Science.gov (United States)

    Warburton, William K.; Hennig, Wolfgang G.

    2018-01-02

    A method and apparatus for measuring the concentrations of radioxenon isotopes in a gaseous sample wherein the sample cell is surrounded by N sub-detectors that are sensitive to both electrons and to photons from radioxenon decays. Signal processing electronics are provided that can detect events within the sub-detectors, measure their energies, determine whether they arise from electrons or photons, and detect coincidences between events within the same or different sub-detectors. The energies of detected two or three event coincidences are recorded as points in associated two or three-dimensional histograms. Counts within regions of interest in the histograms are then used to compute estimates of the radioxenon isotope concentrations. The method achieves lower backgrounds and lower minimum detectable concentrations by using smaller detector crystals, eliminating interference between double and triple coincidence decay branches, and segregating double coincidences within the same sub-detector from those occurring between different sub-detectors.

  16. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  17. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computation effort by using an implicit-explicit mixed time integration method. (orig.)

  18. Collaborative validation of a rapid method for efficient virus concentration in bottled water

    DEFF Research Database (Denmark)

    Schultz, Anna Charlotte; Perelle, Sylvie; Di Pasquale, Simona

    2011-01-01

    Enteric viruses, including norovirus (NoV) and hepatitis A virus (HAV), have emerged as a major cause of waterborne outbreaks worldwide. Due to their low infectious doses and low concentrations in water samples, an efficient and rapid virus concentration method is required for routine control. Three newly developed methods, A, B and C, for virus concentration in bottled water were compared against the reference method D: (A) Convective Interaction Media (CIM) monolithic chromatography; filtration of viruses followed by (B) direct lysis of viruses on the membrane or (C) concentration of viruses by ultracentrifugation; and (D) concentration of viruses by ultrafiltration. Each method (A, B and C) was assessed for its efficacy to recover 10-fold dilutions of HAV and feline calicivirus (FCV) spiked into bottles of 1.5 L of mineral water. Within the tested characteristics, all the new methods showed better performance than method D

  19. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)

  20. A new method for determining the formation energy of a vacancy in concentrated alloys

    International Nuclear Information System (INIS)

    Kinoshita, C.; Kitajima, S.; Eguchi, T.

    1978-01-01

    The disadvantages in the conventional method which determines the formation energy of a vacancy in concentrated alloys from their kinetic behavior during annealing after quenching are pointed out, and an alternative method for overcoming these disadvantages is proposed. (Auth.)

  1. Laboratory Sequence in Computational Methods for Introductory Chemistry

    Science.gov (United States)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  2. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. Copyright © 2012 Wiley Periodicals, Inc.

  3. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing computed tomography images.

  4. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (a static BPNN and a dynamic NARX network) are constructed to model the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest. Copyright © 2016 Elsevier B.V. All rights reserved.
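
    The NARX-style use of lagged inputs can be illustrated with a small open-loop sketch: past exogenous drivers and past TP values are fed to a regressor that predicts the current TP value. The synthetic seasonal series, the choice of two lags and the small multilayer perceptron below are illustrative assumptions, not the configuration of the published SMS.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(exog, target, lags=2):
    """Build NARX-style rows: past exogenous drivers plus past TP values as inputs,
    the current TP value as output (open-loop training)."""
    X, y = [], []
    for t in range(lags, len(target)):
        X.append(np.concatenate([exog[t - lags:t].ravel(), target[t - lags:t]]))
        y.append(target[t])
    return np.array(X), np.array(y)

# Synthetic seasonal series standing in for hydrological drivers and TP concentration.
rng = np.random.default_rng(1)
t = np.arange(120)                                   # ten years of monthly values
exog = np.column_stack([np.sin(2 * np.pi * t / 12), rng.normal(size=120)])
tp = 0.3 + 0.1 * np.sin(2 * np.pi * t / 12 + 0.5) + 0.02 * rng.normal(size=120)

X, y = make_lagged(exog, tp, lags=2)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X[:90], y[:90])
rmse = np.sqrt(np.mean((model.predict(X[90:]) - y[90:]) ** 2))
print("hold-out RMSE: %.4f" % rmse)
```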

  5. Optical efficiency of solar concentrators by a reverse optical path method.

    Science.gov (United States)

    Parretta, A; Antonini, A; Milan, E; Stefancich, M; Martinelli, G; Armani, M

    2008-09-15

    A method for the optical characterization of a solar concentrator, based on the reverse illumination by a Lambertian source and measurement of intensity of light projected on a far screen, has been developed. It is shown that the projected light intensity is simply correlated to the angle-resolved efficiency of a concentrator, conventionally obtained by a direct illumination procedure. The method has been applied by simulating simple reflective nonimaging and Fresnel lens concentrators.

  6. A discrete ordinate response matrix method for massively parallel computers

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1991-01-01

    A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S8 and S16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry

  7. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process in terms of accuracy, automation and robustness, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume in order to assess tumour growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred with them are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  8. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is random, but this random behaviour is such that, for a single species, the times of disintegration follow a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon 222 and the emission of alpha particles from polonium 218 and polonium 214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically since the time of decay of an atom of polonium 218 is not independent of the time of decay of subsequent polonium 214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure
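
    The geometric-distribution idea described above can be sketched directly: each decay is delayed by a geometrically distributed number of discrete time steps, and repeating the simulation shows the spread of alpha counts. The isotope half-lives are standard approximate values; the number of atoms, counting window and number of trials below are assumptions made for illustration.

```python
import numpy as np

DT = 0.01  # discrete time step, minutes
LAMBDA = {"Po-218": np.log(2) / 3.05, "Pb-214": np.log(2) / 26.8, "Bi-214": np.log(2) / 19.9}

def decay_delay(isotope, n, rng):
    """Decay times (minutes) as a geometric number of DT-steps, with per-step
    decay probability 1 - exp(-lambda * DT)."""
    p = 1.0 - np.exp(-LAMBDA[isotope] * DT)
    return rng.geometric(p, size=n) * DT

def alpha_counts(n_po218, window=(0.0, 30.0), trials=200, seed=42):
    """Alpha counts from Po-218 and Po-214 inside a counting window, repeated
    many times to expose the statistical spread. Po-214 alphas are taken to
    follow the Bi-214 decay essentially immediately (164 us half-life)."""
    rng = np.random.default_rng(seed)
    counts = np.empty(trials)
    for k in range(trials):
        t_po218 = decay_delay("Po-218", n_po218, rng)               # Po-218 alpha times
        t_po214 = (t_po218                                          # Po-214 alpha times
                   + decay_delay("Pb-214", n_po218, rng)
                   + decay_delay("Bi-214", n_po218, rng))
        t_all = np.concatenate([t_po218, t_po214])
        counts[k] = np.sum((t_all >= window[0]) & (t_all < window[1]))
    return counts

c = alpha_counts(1000)
print("mean alpha counts: %.1f, standard deviation: %.1f" % (c.mean(), c.std()))
```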

  9. Measurements of natural uranium concentration in Caspian Sea and Persian Gulf water by laser fluorimetric method

    International Nuclear Information System (INIS)

    Garshasbi, H.; Karimi Diba, J.; Jahanbakhshian, M. H.; Asghari, S. K.; Heravi, G. H.

    2005-01-01

    Natural uranium exists in the Earth's crust and in seawater. The concentration of uranium might increase due to human activity or geological changes. The aim of this study was to verify the suitability of the laser fluorimetric method for determining the uranium concentration in Caspian Sea and Persian Gulf water. Materials and Methods: The laser fluorimetric method was used to determine the uranium concentration in several samples prepared from Caspian Sea and Persian Gulf water. Biological and chemical substances were eliminated from the samples for better evaluation of the method. Results: As the concentration of natural uranium in the samples increases, the response of the instrument (uranium analyzer) increases accordingly. The standard deviation also increased slightly and gradually. Conclusion: The results indicate that the laser fluorimetric method shows a reliable and accurate response for uranium concentrations up to 100 μg/L in samples after removal of biological and organic substances.

  10. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases, introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  11. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not exist as of now. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in the pricing scheme that rewards transformers for tap shifting while participating in this market. The proposed model is a challenging non-linear mixed integer program. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)

  12. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada
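
    A minimal sketch of the same idea, smoothing the topography, regressing observed water levels on the smoothed surface, and then adjusting the first approximation with radially decaying control-point corrections, is given below. The smoothing window, decay radius and synthetic grid are assumptions for illustration, not the parameters used in the original study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def water_table(topo, control_ij, control_z, smooth=9, radius=15.0):
    """Sketch of the smoothed-topography / regression / local-adjustment idea.

    topo       -- 2-D array of topographic elevations
    control_ij -- (n, 2) integer grid indices of water-level observations
    control_z  -- (n,) observed water-table elevations
    smooth     -- size of the smoothing window (grid cells)
    radius     -- e-folding distance (cells) of each control point's influence"""
    smoothed = uniform_filter(topo.astype(float), size=smooth)

    # First approximation: regress observed water levels on the smoothed topography.
    x = smoothed[control_ij[:, 0], control_ij[:, 1]]
    slope, intercept = np.polyfit(x, control_z, 1)
    wt = intercept + slope * smoothed

    # Second pass: pull the surface toward each control point with radially decaying weights.
    ii, jj = np.indices(topo.shape)
    weighted_resid = np.zeros_like(wt)
    weight_sum = np.zeros_like(wt)
    for (ci, cj), cz in zip(control_ij, control_z):
        w = np.exp(-np.hypot(ii - ci, jj - cj) / radius)
        weighted_resid += w * (cz - wt[ci, cj])
        weight_sum += w
    resid_avg = weighted_resid / np.maximum(weight_sum, 1e-12)
    influence = weight_sum / (1.0 + weight_sum)   # fades to zero far from any control point
    return wt + resid_avg * influence

# Tiny synthetic example.
topo = np.add.outer(np.linspace(100, 200, 50), np.linspace(0, 50, 60))
ctrl_ij = np.array([[10, 10], [25, 40], [40, 20]])
ctrl_z = np.array([95.0, 150.0, 170.0])
print(round(float(water_table(topo, ctrl_ij, ctrl_z)[25, 40]), 1))
```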

  13. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction in addition to volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and under proper conditions, is FULLY AUTOMATED and requires no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself in total from other possible re-slicing software solutions due to complete automation and advanced processing and analysis capabilities.
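
    The unwrap-and-re-slice step can be sketched as a polar resampling of each axial slice. The snippet below is a minimal illustration of that geometric transform only; the surface edge detection that makes the published tool fully automated is not reproduced, and the centre, radial range and toy phantom are assumed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_slice(ct_slice, center, r_min, r_max, n_theta=720, n_r=None):
    """Resample one axial CT slice from (row, col) to (theta, r) so that a
    cylindrical wall becomes a flat sheet; stacking the unwrapped slices along
    the scanner axis gives the vertical 2-D sheets described above."""
    if n_r is None:
        n_r = int(round(r_max - r_min))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_min, r_max, n_r)
    tt, rr = np.meshgrid(thetas, radii, indexing="ij")
    rows = center[0] + rr * np.sin(tt)
    cols = center[1] + rr * np.cos(tt)
    sheet = map_coordinates(ct_slice, [rows.ravel(), cols.ravel()],
                            order=1, mode="nearest")
    return sheet.reshape(n_theta, n_r)

# Toy example: a bright ring (a pipe wall of radius ~40 voxels) becomes a flat band.
yy, xx = np.indices((128, 128))
radius = np.hypot(yy - 64, xx - 64)
slice_ = ((radius > 38) & (radius < 42)).astype(float)
sheet = unwrap_slice(slice_, center=(64, 64), r_min=30, r_max=50)
print(sheet.shape, float(sheet.max()))
```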

  14. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given on the effect of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the use of the methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  15. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical data. This research is based on gathering information for the intended use of a full-profile driving machine to drive the motorway tunnel. In the part of the exploration gallery driven by the TBM method, information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. This monitoring system is based on the industrial computer PC 104. It records 4 basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of driving advance, the rotation speed of the disintegrating head of the TBM, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are mathematically calculated. These values characterize the rock mass properties and their changes. To define the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the methodology for computing the gathered monitoring information, prepared for the driving machine Voest-Alpine ATB 35H at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created in an EXCEL program, and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploration gallery driving process in the Višňové-Dubná skala motorway tunnel.

  16. Description of a method for computing fluid-structure interaction

    International Nuclear Information System (INIS)

    Gantenbein, F.

    1982-02-01

    A general formulation allowing computation of structural vibrations in a dense fluid is described. It is based on modelling the fluid with fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between structure and fluid is introduced by surface elements. This method is easy to introduce in a general finite element code. Validation was obtained by analytical calculations and tests. It is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented. [fr]

  17. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    2011-01-01

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....

  18. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating...... candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...

  19. Method and apparatus for managing transactions with connected computers

    Science.gov (United States)

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  20. Numerical methods and computers used in elastohydrodynamic lubrication

    Science.gov (United States)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  1. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
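
    For orientation, the classical serial scheme that the authors set out to improve upon can be sketched compactly: a forward sweep builds "left-connected" Green's functions and a backward sweep yields the diagonal blocks of the inverse. The sketch below implements that textbook recurrence with dense NumPy blocks; it is not the parallel Schur-complement/cyclic-reduction algorithm proposed in the paper.

```python
import numpy as np

def diagonal_green_blocks(A_diag, A_upper, A_lower):
    """Classical serial recurrence for the diagonal blocks of G = A^(-1) when A is
    block tridiagonal. A_diag[i] is block (i, i); A_upper[i] is block (i, i+1);
    A_lower[i] is block (i+1, i)."""
    n = len(A_diag)
    # Forward sweep: 'left-connected' Green's functions.
    gL = [np.linalg.inv(A_diag[0])]
    for i in range(1, n):
        gL.append(np.linalg.inv(A_diag[i] - A_lower[i - 1] @ gL[i - 1] @ A_upper[i - 1]))
    # Backward sweep: full diagonal blocks of the inverse.
    G = [None] * n
    G[-1] = gL[-1]
    for i in range(n - 2, -1, -1):
        G[i] = gL[i] + gL[i] @ A_upper[i] @ G[i + 1] @ A_lower[i] @ gL[i]
    return G

# Quick check against a dense inverse on a small random block-tridiagonal matrix.
rng = np.random.default_rng(0)
b, n = 3, 4
D = [rng.normal(size=(b, b)) + 5.0 * np.eye(b) for _ in range(n)]
U = [rng.normal(size=(b, b)) for _ in range(n - 1)]
L = [rng.normal(size=(b, b)) for _ in range(n - 1)]
A = np.zeros((b * n, b * n))
for i in range(n):
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D[i]
    if i < n - 1:
        A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = U[i]
        A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = L[i]
G = diagonal_green_blocks(D, U, L)
print(np.allclose(G[2], np.linalg.inv(A)[2*b:3*b, 2*b:3*b]))
```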

  2. A method for estimation of plasma albumin concentration from the buffering properties of whole blood

    DEFF Research Database (Denmark)

    Rees, Stephen Edward; Diemer, Tue; Kristensen, Søren Risom

    2012-01-01

    PURPOSE: Hypoalbuminemia is strongly associated with poor clinical outcome. Albumin is usually measured at the central laboratory rather than at the point of care, but in principle, information exists in the buffering properties of whole blood to estimate plasma albumin concentration from point-of-care measurements of acid-base and oxygenation status. This article presents and evaluates a new method for doing so. MATERIALS AND METHODS: The mathematical method for estimating plasma albumin concentration is described. To evaluate the method at numerous albumin concentrations, blood from 19 healthy subjects......

  3. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴, conjugate gradient is superior.

  4. Comparison of four methods to assess colostral IgG concentration in dairy cows.

    Science.gov (United States)

    Chigerwe, Munashe; Tyler, Jeff W; Middleton, John R; Spain, James N; Dill, Jeffrey S; Steevens, Barry J

    2008-09-01

    To determine sensitivity and specificity of 4 methods to assess colostral IgG concentration in dairy cows and determine the optimal cutpoint for each method. Cross-sectional study. 160 Holstein dairy cows. 171 composite colostrum samples collected within 2 hours after parturition were used in the study. Test methods used to estimate colostral IgG concentration consisted of weight of the first milking, 2 hydrometers, and an electronic refractometer. Results of the test methods were compared with colostral IgG concentration determined by means of radial immunodiffusion. For each method, sensitivity and specificity for detecting low colostral IgG concentration were calculated at the optimal cutpoint. Sensitivity was lower for weight of the first milking than for the other 3 methods (hydrometer 1, 0.75; hydrometer 2, 0.76; refractometer, 0.75), but no significant differences were identified among the other 3 methods with regard to sensitivity. Specificities at the optimal cutpoint were similar for all 4 methods. Results suggested that use of either hydrometer or the electronic refractometer was an acceptable method of screening colostrum for low IgG concentration; however, the manufacturer-defined scale for both hydrometers overestimated colostral IgG concentration. Use of weight of the first milking as a screening test to identify bovine colostrum with inadequate IgG concentration could not be justified because of the low sensitivity.
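
    The screening-test evaluation described above reduces to counting true and false classifications at a chosen cutpoint. The helper below sketches that calculation; the toy data, the 50 g/L "low IgG" threshold and the candidate cutpoints are assumptions for illustration only.

```python
import numpy as np

def screen_performance(test_values, igg_rid, cutpoint, low_igg=50.0):
    """Sensitivity and specificity of a screening test for low-IgG colostrum.

    A sample is truly 'low' when the radial-immunodiffusion value is below
    low_igg (g/L, an assumed threshold); the screening test flags it as low
    when its reading falls below cutpoint."""
    truly_low = igg_rid < low_igg
    flagged_low = test_values < cutpoint
    tp = np.sum(truly_low & flagged_low)
    fn = np.sum(truly_low & ~flagged_low)
    tn = np.sum(~truly_low & ~flagged_low)
    fp = np.sum(~truly_low & flagged_low)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy data: a surrogate reading loosely tracking the RID-measured IgG value.
rng = np.random.default_rng(3)
igg = rng.uniform(10, 120, size=171)
surrogate = igg + rng.normal(0, 15, size=171)
for cut in (40, 50, 60):
    se, sp = screen_performance(surrogate, igg, cutpoint=cut)
    print("cutpoint %d: sensitivity %.2f, specificity %.2f" % (cut, se, sp))
```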

  5. Assessment of the cryoprotectant concentration inside a bulky organ for cryopreservation using X-ray computed tomography.

    Science.gov (United States)

    Corral, Ariadna; Balcerzyk, Marcin; Parrado-Gallego, Ángel; Fernández-Gómez, Isabel; Lamprea, David R; Olmo, Alberto; Risco, Ramón

    2015-12-01

    Cryoprotection of bulky organs is crucial for their storage and subsequent transplantation. In this work we demonstrate the capability of X-ray computed tomography (CT) as a non-invasive method to measure the cryoprotectant (cpa) concentration inside a tissue or an organ, specifically for the case of dimethyl sulfoxide (Me2SO). Remarkably, Me2SO has been the leading agent in cell and tissue cryopreservation techniques. Although CT technologies are mainly based on density differences, and many cpas are alcohols with densities similar to water, the use of a very low acceleration voltage (∼70 kV) and the sulfur atom in the Me2SO molecule make it possible to visualize this cpa inside tissues. As a result we obtain a CT signal proportional to the Me2SO concentration, with a spatial resolution of up to 50 μm in the case of our device. Copyright © 2015 Elsevier Inc. All rights reserved.
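
    In practice, a concentration estimate of this kind rests on a linear calibration between measured CT numbers and known Me2SO concentrations in phantoms. The sketch below shows that calibration-and-inversion step with made-up numbers; the actual calibration values at 70 kV would have to come from measurements such as those reported in the study.

```python
import numpy as np

# Hypothetical calibration: mean CT numbers (HU) measured in phantoms of known
# Me2SO concentration (% w/v) at ~70 kV. The values below are made up for illustration.
concentration = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
ct_number = np.array([2.0, 14.5, 27.8, 40.1, 53.6])

slope, intercept = np.polyfit(concentration, ct_number, 1)

def me2so_from_ct(hu):
    """Invert the linear calibration to estimate the local Me2SO concentration."""
    return (hu - intercept) / slope

print("estimated concentration for a 33 HU voxel: %.1f %%" % me2so_from_ct(33.0))
```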

  6. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, water liquid, and vapor flowing in a zoned containment as a result of high energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamic problem, semiscale blowdown experiments, full scale MARVIKEN test results, Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, CONTEMPT-PS. The analytical consideration is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by U.S. Nuclear Regulatory Commission in September 1975

  7. Oligomerization of G protein-coupled receptors: computational methods.

    Science.gov (United States)

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies as well as discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not yield always consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method especially when guided by experimental constraints. Some disadvantages like limited receptor flexibility and non-consideration of the membrane environment have to be taken into account. Molecular dynamics simulation can overcome these drawbacks giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug target for diverse diseases, unveiling molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.

  8. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  9. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  10. Computational methods for ab initio detection of microRNAs

    Directory of Open Access Journals (Sweden)

    Malik Yousef

    2012-10-01

    Full Text Available MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the Microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol where it is processed by Dicer and then incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are just beginning to be elucidated, and therefore computational prediction algorithms cannot model the process but are usually forced to employ machine learning approaches. This work focuses on ab initio prediction methods throughout; homology-based miRNA detection methods are therefore not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed.

  11. Data graphing methods, articles of manufacture, and computing devices

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.; Foote, Harlan P.; Whiting, Mark A.

    2016-12-13

    Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set, displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels, selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution, modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions, and after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.

  12. A finite element solution method for quadrics parallel computer

    International Nuclear Information System (INIS)

    Zucchini, A.

    1996-08-01

    A distributed preconditioned conjugate gradient method for finite element analysis has been developed and implemented on a parallel SIMD Quadrics computer. The main characteristic of the method is that it does not require any actual assembly of all element equations into a global system. The physical domain of the problem is partitioned into cells of n_p finite elements and each cell element is assigned to a different node of an n_p-processor machine. Element stiffness matrices are stored in the data memory of the assigned processing node and the solution process is executed completely in parallel at the element level. Inter-element and therefore inter-processor communications are required once per iteration to perform local sums of vector quantities between neighbouring elements. A prototype implementation has been tested on an 8-node Quadrics machine on a simple 2D benchmark problem
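
    The central idea of this abstract, running conjugate gradients on K·u = f without ever assembling the global stiffness matrix, can be sketched in a few lines. The following is a minimal serial Python/NumPy illustration, not the Quadrics SIMD implementation; the 1D bar of two-node elements, the end load and the Jacobi preconditioner are assumptions made only for this example.

    ```python
    # Minimal sketch (assumptions: 1D bar of linear two-node elements, left end fixed,
    # unit end load). Illustrates a matrix-free, element-level preconditioned CG:
    # the global stiffness matrix is never assembled, only element matrices are stored.
    import numpy as np

    n_el = 8                      # number of finite elements
    EA, h = 1.0, 1.0 / n_el       # stiffness and element length (hypothetical values)
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness matrix
    conn = [(e, e + 1) for e in range(n_el)]               # element connectivity

    def matvec(x):
        """K @ x computed element by element (node 0 is fixed, unknowns are nodes 1..n)."""
        full = np.concatenate(([0.0], x))        # prepend the fixed dof
        y = np.zeros(n_el + 1)
        for (i, j) in conn:
            y[[i, j]] += ke @ full[[i, j]]       # local multiply + scatter (the only
        return y[1:]                             # step needing inter-element sums)

    # Jacobi preconditioner accumulated from element diagonal contributions
    diag = np.zeros(n_el + 1)
    for (i, j) in conn:
        diag[[i, j]] += np.diag(ke)
    M_inv = 1.0 / diag[1:]

    f = np.zeros(n_el); f[-1] = 1.0              # unit load at the free end
    x = np.zeros(n_el)
    r = f - matvec(x); z = M_inv * r; p = z.copy()
    for _ in range(200):
        Ap = matvec(p)
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-10:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new

    print(x)   # tip displacement should approach F*L/(EA) = 1.0 at the last node
    ```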

  13. A novel dual energy method for enhanced quantitative computed tomography

    Science.gov (United States)

    Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.

    2018-01-01

    Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.

  14. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  15. Parallel computation of multigroup reactivity coefficient using iterative method

    Science.gov (United States)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets form a stainless-steel tube on which high-enriched uranium is superimposed; the irradiated FPM tube is intended to produce fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with core performance, one such disturbance coming from changes in flux or reactivity. A method is therefore needed for calculating safety margins as the configuration changes during the life of the reactor, and making the code faster becomes a necessity. The neutron safety margin for the research reactor can be recalculated without modifying the reactivity calculation, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally complex, and several parallel iterative algorithms have been developed for the resulting large, sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation was developed as part of the safety analysis using parallel processing; the calculation can be done more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
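
    The red-black Gauss-Seidel iteration mentioned here is attractive for parallel machines because, on a checkerboard colouring of the grid, all points of one colour can be updated simultaneously. A minimal sketch for a 2D Poisson-type model problem (not the multigroup diffusion code described above) might look as follows; the grid size, source and iteration count are illustrative assumptions.

    ```python
    # Minimal sketch of red-black Gauss-Seidel on a 2D Poisson model problem
    # (-Laplace(u) = f, Dirichlet u = 0 on the boundary). All "red" points
    # (i+j even) are updated first, then all "black" points, so each half-sweep
    # is fully parallel / vectorizable.
    import numpy as np

    n = 65                                  # grid points per side (assumed)
    h = 1.0 / (n - 1)
    f = np.ones((n, n))                     # uniform source (assumed)
    u = np.zeros((n, n))

    # Precompute the two colour masks for the interior points
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    interior = (ii > 0) & (ii < n - 1) & (jj > 0) & (jj < n - 1)
    red = interior & ((ii + jj) % 2 == 0)
    black = interior & ((ii + jj) % 2 == 1)

    def half_sweep(u, mask):
        # Gauss-Seidel update u_ij = (sum of 4 neighbours + h^2 f_ij) / 4 on one colour
        nbr = np.zeros_like(u)
        nbr[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                           u[1:-1, :-2] + u[1:-1, 2:])
        u[mask] = (nbr[mask] + h * h * f[mask]) / 4.0

    for it in range(2000):
        half_sweep(u, red)
        half_sweep(u, black)

    print("max of u:", u.max())             # approaches ~0.0737 for this model problem
    ```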

  16. Comparison of VIDAS and Radioimmunoassay Methods for Measurement of Cortisol Concentration in Bovine Serum

    Directory of Open Access Journals (Sweden)

    Daniela Proverbio

    2013-01-01

    Full Text Available Radioimmunoassay (RIA) is the “gold standard” method for evaluation of serum cortisol concentration. The VIDAS cortisol test is an enzyme-linked fluorescent assay designed for the MiniVidas system. The aim of this study was to compare the VIDAS method with RIA for measurement of bovine serum cortisol concentration. Cortisol concentrations were evaluated in 40 cows using both VIDAS and RIA methods, the latter as the reference method. A paired Student’s t-test, Pearson’s correlation analysis, Bland-Altman plot, and Deming regression analysis were used to compare the two methods. There was no statistically significant difference between mean serum cortisol concentrations measured by the VIDAS or RIA methods. Both methods were able to detect significant differences between mean low and high cortisol concentrations. The correlation coefficient was low, but a Bland-Altman plot and Deming regression analysis show neither constant nor proportional error. The VIDAS method produced slightly higher values than RIA, but the difference was small and in no case did the mean value move outside the normal range. Results suggest that the VIDAS method is suitable for the determination of bovine serum cortisol concentration in studies of large numbers of animals.
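
    The Bland-Altman comparison used in studies like this one reduces to a few summary statistics (mean bias and 95 % limits of agreement). A small illustrative sketch, with made-up paired values standing in for the actual VIDAS/RIA cortisol measurements, might be:

    ```python
    # Minimal Bland-Altman sketch for two paired methods (values are illustrative,
    # not the cortisol data from the study).
    import numpy as np

    ria   = np.array([12.1, 8.4, 25.3, 5.6, 17.9, 30.2, 10.5, 14.8])   # reference method
    vidas = np.array([12.9, 8.1, 26.8, 6.0, 18.5, 31.0, 11.2, 15.5])   # test method

    diff = vidas - ria
    mean_pair = (vidas + ria) / 2.0

    bias = diff.mean()                      # mean difference between methods
    sd = diff.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd   # 95 % limits of agreement

    print(f"bias = {bias:.2f}, limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
    # Plotting diff against mean_pair gives the classical Bland-Altman plot.
    ```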

  17. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    Energy Technology Data Exchange (ETDEWEB)

    Taylor-Pashow, Kathryn M. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Daniel H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-10-23

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking of the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples and comparison to results obtained using the SVOA TOA subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and this to become the primary method for quantifying the TiDG.

  18. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    International Nuclear Information System (INIS)

    Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.

    2017-01-01

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking of the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples and comparison to results obtained using the SVOA TOA subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and this to become the primary method for quantifying the TiDG.

  19. Modeling of the Critical Micelle Concentration (CMC) of Nonionic Surfactants with an Extended Group-Contribution Method

    DEFF Research Database (Denmark)

    Mattei, Michele; Kontogeorgis, Georgios; Gani, Rafiqul

    2013-01-01

    A group-contribution (GC) property prediction model for estimating the critical micelle concentration (CMC) of nonionic surfactants in water at 25 °C is presented. The model is based on the Marrero and Gani GC method. In a systematic analysis of the model performance against experimental data, those compounds that exhibit larger correlation errors (based only on first- and second-order groups) are assigned to more detailed molecular descriptions, so that better correlations of critical micelle concentrations are obtained. The group parameter estimation has been performed using a data set ... Compared with other models for estimating the critical micelle concentration, and in particular the quantitative structure−property relationship models, the developed GC model provides an accurate correlation and allows for an easier and faster application in computer-aided molecular design techniques, facilitating chemical process and product design.

  20. Hamming method for solving the delayed neutron precursor concentration for reactivity calculation

    International Nuclear Information System (INIS)

    Díaz, Daniel Suescún; Ospina, Juan Felipe Flórez; Sarasty, Jesús Andrés Rodríguez

    2012-01-01

    Highlights: ► We present a new formulation to calculate the reactivity using the Hamming method. ► This method shows better accuracy than existing methods for reactivity calculation. ► The reactivity is calculated without limitation of the nuclear power form. ► The method can be implemented in reactivity meters with time step of up to 0.1 s. - Abstract: We propose a new method for numerically solving the inverse point kinetic equation for a nuclear reactor using the Hamming method, without requiring the nuclear power history and without using the Laplace transform. This new method converges with accuracy of order h⁵, where h is the step in the computation time. The procedure is validated for different forms of the nuclear power and with different time steps. The results indicate that this method has a better accuracy and lower computational effort compared with other conventional methods that use the nuclear power history.
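
    Hamming's method is a classical fourth-order predictor-corrector scheme (local error of order h⁵, consistent with the accuracy quoted above). The sketch below applies the basic predictor/corrector pair, without the modifier step and without the inverse point kinetics application of the paper, to a generic ODE y' = f(t, y); the test equation and step size are arbitrary.

    ```python
    # Minimal sketch of Hamming's predictor-corrector method (without the modifier
    # step) for y' = f(t, y). Startup values come from classical RK4. The test
    # problem y' = -y, y(0) = 1 is arbitrary; the exact solution is exp(-t).
    import numpy as np

    def f(t, y):
        return -y

    def rk4_step(t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    h, n_steps = 0.1, 50
    t = np.arange(n_steps + 1) * h
    y = np.zeros(n_steps + 1)
    y[0] = 1.0
    for n in range(3):                      # RK4 provides the three startup points
        y[n + 1] = rk4_step(t[n], y[n], h)

    for n in range(3, n_steps):
        fn, fnm1, fnm2 = f(t[n], y[n]), f(t[n - 1], y[n - 1]), f(t[n - 2], y[n - 2])
        # Milne predictor
        p = y[n - 3] + (4 * h / 3) * (2 * fn - fnm1 + 2 * fnm2)
        # Hamming corrector (local truncation error of order h^5)
        y[n + 1] = (9 * y[n] - y[n - 2]) / 8 + (3 * h / 8) * (f(t[n + 1], p) + 2 * fn - fnm1)

    print("max abs error vs exp(-t):", np.max(np.abs(y - np.exp(-t))))
    ```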

  1. Computations of concentration of radon and its decay products against time. Computer program; Obliczanie koncentracji radonu i jego produktow rozpadu w funkcji czasu. Program komputerowy

    Energy Technology Data Exchange (ETDEWEB)

    Machaj, B. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    This research aims to develop a device for continuous monitoring of radon in air by measuring the alpha activity of radon and its short-lived decay products. The variation of the alpha activity of radon and its daughters with time influences the measured results and must therefore be known. A computer program in the Turbo Pascal language, adapted for IBM PC computers, was therefore developed to perform the computations using the known relations involved. The program computes the activity of ²²²Rn and its daughter products ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi and ²¹⁴Po every 1 min within the period 0-255 min for any state of radioactive equilibrium between radon and its daughter products. The program also computes the alpha activity of ²²²Rn + ²¹⁸Po + ²¹⁴Po against time and the total alpha activity over a selected interval of time. The results of the computations are stored on the computer hard disk in ASCII format and can be used by a graphics program, e.g. DrawPerfect, to make diagrams. The equations employed for computing the alpha activity of radon and its decay products, as well as a description of the program functions, are given. (author). 2 refs, 4 figs.
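
    The activities computed by such a program follow from the standard decay-chain (Bateman) equations. A small sketch using a matrix-exponential solution of the chain ²²²Rn → ²¹⁸Po → ²¹⁴Pb → ²¹⁴Bi → ²¹⁴Po is given below (not the Turbo Pascal code of the report); the half-lives are approximate textbook values, ²¹⁴Po is treated as in instant equilibrium with ²¹⁴Bi because of its microsecond half-life, and the initial condition assumes pure radon with no daughters.

    ```python
    # Minimal sketch: activities of 222Rn and its short-lived daughters vs time,
    # starting from pure radon (no daughters). Half-lives are approximate; 214Po
    # (T1/2 ~ 164 us) is taken to follow 214Bi instantaneously.
    import numpy as np
    from scipy.linalg import expm

    half_lives_min = {"Rn222": 3.824 * 24 * 60, "Po218": 3.05, "Pb214": 26.8, "Bi214": 19.9}
    lam = np.array([np.log(2) / half_lives_min[k] for k in ("Rn222", "Po218", "Pb214", "Bi214")])

    # dN/dt = A N for the linear chain Rn222 -> Po218 -> Pb214 -> Bi214
    A = np.diag(-lam) + np.diag(lam[:-1], k=-1)

    N0 = np.array([1.0 / lam[0], 0.0, 0.0, 0.0])   # atoms giving 1 unit of initial Rn activity

    for t in range(0, 256, 15):                    # every 15 min over 0-255 min
        N = expm(A * t) @ N0
        activity = lam * N                         # activity of each chain member
        alpha = activity[0] + activity[1] + activity[3]   # alpha emitters: Rn, Po218, (Bi->)Po214
        print(f"t={t:3d} min  Rn={activity[0]:.3f}  Po218={activity[1]:.3f}  "
              f"Pb214={activity[2]:.3f}  Bi214/Po214={activity[3]:.3f}  total alpha={alpha:.3f}")
    ```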

  2. Description of a double centrifugation tube method for concentrating canine platelets

    OpenAIRE

    Perazzi, Anna; Busetto, Roberto; Martinello, Tiziana; Drigo, Michele; Pasotto, Daniela; Cian, Francesco; Patruno, Marco; Iacopetti, Ilaria

    2013-01-01

    Background To evaluate the efficiency of platelet-rich plasma preparations by means of a double centrifugation tube method to obtain platelet-rich canine plasma at a concentration at least 4 times higher than the baseline value and a concentration of white blood cells not exceeding twice the reference range. A complete blood count was carried out for each sample and each concentrate. Whole blood samples were collected from 12 clinically healthy dogs (consenting blood donors). Blood was proces...

  3. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the subjacent structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
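
    The LPRM is essentially a locally weighted (LOESS-type) polynomial fit of the measured property against concentration; the cmc can then be located, for example, where the estimated change of slope is strongest. A rough sketch under those assumptions, with synthetic conductivity-like data rather than the systems studied in the paper:

    ```python
    # Rough sketch of a local polynomial regression estimate of the cmc:
    # fit a weighted quadratic in a moving window and locate the concentration
    # where the estimated curvature (change of slope) is largest. The synthetic
    # piecewise-linear "conductivity" data below stand in for real measurements.
    import numpy as np

    rng = np.random.default_rng(0)
    conc = np.linspace(0.1, 20.0, 120)                    # concentration axis (arbitrary units)
    true_cmc = 8.0
    prop = np.where(conc < true_cmc, 5.0 * conc, 5.0 * true_cmc + 1.5 * (conc - true_cmc))
    prop += rng.normal(scale=0.3, size=conc.size)         # measurement noise

    def local_quadratic(x0, x, y, bandwidth):
        """Tricube-weighted quadratic fit centred at x0; returns fitted value and 2nd derivative."""
        u = np.abs(x - x0) / bandwidth
        w = np.where(u < 1.0, (1.0 - u**3) ** 3, 0.0)
        X = np.vander(x - x0, 3, increasing=True)          # columns: 1, (x-x0), (x-x0)^2
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta[0], 2.0 * beta[2]                      # value at x0, second derivative

    grid = np.linspace(1.0, 19.0, 200)
    curvature = np.array([local_quadratic(x0, conc, prop, bandwidth=2.5)[1] for x0 in grid])
    cmc_estimate = grid[np.argmin(curvature)]              # strongest downward change of slope
    print("estimated cmc ~", round(cmc_estimate, 2))
    ```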

  4. Plasma concentrations of blood coagulation factor VII measured by immunochemical and amidolytic methods

    DEFF Research Database (Denmark)

    Bladbjerg, E-M; Gram, J; Jespersen, J

    2000-01-01

    Ever since the coagulant activity of blood coagulation factor VII (FVII:C) was identified as a risk indicator of cardiac death, a large number of studies have measured FVII protein concentrations in plasma. FVII protein concentrations are either measured immunologically with an ELISA method (FVII...

  5. Application of Computational Methods in Planaria Research: A Current Update

    Directory of Open Access Journals (Sweden)

    Ghosh Shyamasree

    2017-07-01

    Full Text Available Planaria is a member of the Phylum Platyhelminthes, the flatworms. Planarians possess the unique ability of regeneration from adult stem cells or neoblasts and are important as a model organism for regeneration and developmental studies. Although research is being actively carried out globally through conventional methods to understand the process of regeneration from neoblasts and the developmental biology, neurobiology and immunology of Planaria, there are many thought-provoking questions related to stem cell plasticity and the uniqueness of the regenerative potential of Planarians among other members of the Phylum Platyhelminthes. The complexity of receptors and signalling mechanisms, the immune system network, the biology of repair, and responses to injury are yet to be understood in Planaria. Genomic and transcriptomic studies have generated a vast repository of data, but their availability and analysis is a challenging task. Data mining, computational approaches to gene curation, bioinformatics tools for the analysis of transcriptomic data, the design of databases, the application of algorithms in deciphering changes of morphology induced by RNA interference (RNAi) approaches, and the interpretation of regeneration experiments are a new venture in Planaria research that is helping researchers across the globe to understand the biology. We highlight the applications of Hidden Markov models (HMMs) in the design of computational tools and their applications in Planaria for decoding its complex biology.

  6. Comparing concentration methods: parasitrap® versus Kato-Katz for ...

    African Journals Online (AJOL)

    Comparing concentration methods: parasitrap® versus Kato-Katz for studying the prevalence of Helminths in Bengo province, Angola. Clara Mirante, Isabel Clemente, Graciette Zambu, Catarina Alexandre, Teresa Ganga, Carlos Mayer, Miguel Brito ...

  7. Evaluation and Comparison of Chemiluminescence and UV Photometric Methods for Measuring Ozone Concentrations in Ambient Air

    Science.gov (United States)

    The current Federal Reference Method (FRM) for measuring concentrations of ozone in ambient air is based on the dry, gas-phase, chemiluminescence reaction between ethylene (C2H4) and any ozone (O3) that may be p...

  8. Passive sampling methods for contaminated sediments: Scientific rationale supporting use of freely dissolved concentrations

    DEFF Research Database (Denmark)

    Mayer, Philipp; Parkerton, Thomas F.; Adams, Rachel G.

    2014-01-01

    Passive sampling methods (PSMs) allow the quantification of the freely dissolved concentration (Cfree ) of an organic contaminant even in complex matrices such as sediments. Cfree is directly related to a contaminant's chemical activity, which drives spontaneous processes including diffusive upta...

  9. Software Defects, Scientific Computation and the Scientific Method

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  10. Computation of Hemagglutinin Free Energy Difference by the Confinement Method

    Science.gov (United States)

    2017-01-01

    Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4 kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344

  11. Conference on Boundary and Interior Layers : Computational and Asymptotic Methods

    CERN Document Server

    Stynes, Martin; Zhang, Zhimin

    2017-01-01

    This volume collects papers associated with lectures that were presented at the BAIL 2016 conference, which was held from 14 to 19 August 2016 at Beijing Computational Science Research Center and Tsinghua University in Beijing, China. It showcases the variety and quality of current research into numerical and asymptotic methods for theoretical and practical problems whose solutions involve layer phenomena. The BAIL (Boundary And Interior Layers) conferences, held usually in even-numbered years, bring together mathematicians and engineers/physicists whose research involves layer phenomena, with the aim of promoting interaction between these often-separate disciplines. These layers appear as solutions of singularly perturbed differential equations of various types, and are common in physical problems, most notably in fluid dynamics. This book is of interest for current researchers from mathematics, engineering and physics whose work involves the accurate approximation of solutions of singularly perturbed diffe...

  12. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  13. Statistical physics and computational methods for evolutionary game theory

    CERN Document Server

    Javarone, Marco Alberto

    2018-01-01

    This book presents an introduction to Evolutionary Game Theory (EGT) which is an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often require the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...

  14. Activation method for measuring the neutron spectra parameters. Computer software

    International Nuclear Information System (INIS)

    Efimov, B.V.; Ionov, V.S.; Konyaev, S.I.; Marin, S.V.

    2005-01-01

    A mathematical statement of the task of determining the spectral characteristics of neutron fields using the unified activation detectors (UKD) developed at RRC KI is presented. The method proposed by the authors for processing the results of activation measurements and calculating the parameters used to estimate the characteristics of the neutron spectra is discussed. Features of the processing of experimental data obtained in activation measurements with UKD are considered. UKD activation detectors contain several specially selected isotopes whose irradiation produces peaks of activity within the common activity spectrum. Computational processing of the measurement results is applied to determine the spectrum parameters of nuclear reactor installations with a thermal or near-thermal neutron power spectrum. An example of the data processing for measurements carried out at the RRC KI research reactor F-1 is presented [ru]

  15. A computed microtomography method for understanding epiphyseal growth plate fusion

    Science.gov (United States)

    Staines, Katherine A.; Madi, Kamel; Javaheri, Behzad; Lee, Peter D.; Pitsillides, Andrew A.

    2017-12-01

    The epiphyseal growth plate is a developmental region responsible for linear bone growth, in which chondrocytes undertake a tightly regulated series of biological processes. Concomitant with the cessation of growth and sexual maturation, the human growth plate undergoes progressive narrowing, and ultimately disappears. Despite the crucial role of this growth plate fusion ‘bridging’ event, the precise mechanisms by which it is governed are complex and yet to be established. Progress is likely hindered by the current methods for growth plate visualisation; these are invasive and largely rely on histological procedures. Here we describe our non-invasive method utilising synchrotron x-ray computed microtomography for the examination of growth plate bridging, which ultimately leads to its closure coincident with termination of further longitudinal bone growth. We then apply this method to a dataset obtained from a benchtop microcomputed tomography scanner to highlight its potential for wide usage. Furthermore, we conduct finite element modelling at the micron-scale to reveal the effects of growth plate bridging on local tissue mechanics. Employment of these 3D analyses of growth plate bone bridging is likely to advance our understanding of the physiological mechanisms that control growth plate fusion.

  16. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and do this without relying on judgment or exhaustive sensitivity studies. Purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables

  17. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    Science.gov (United States)

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-08

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.

  18. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat Bureau Standards 64C, (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
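
    The core idea here, fitting a polynomial in the bounded variable t = x/(1+x) ∈ [0, 1) instead of in x itself so that a single low-degree fit stays accurate over a wide range of x, can be sketched as follows. The target function and degree are illustrative, not Hubbell's integral I(a,b).

    ```python
    # Sketch of the rational-parameter idea: approximate f(x) on a wide range of x
    # by a polynomial in t = x / (1 + x), which maps x in [0, inf) to t in [0, 1).
    # The function below (arctan) is only an illustration, not the plaque integral.
    import numpy as np

    def f(x):
        return np.arctan(x)

    x_fit = np.linspace(0.0, 20.0, 400)
    t_fit = x_fit / (1.0 + x_fit)

    coeffs = np.polyfit(t_fit, f(x_fit), deg=6)           # sixth-degree polynomial in t

    x_test = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    t_test = x_test / (1.0 + x_test)
    approx = np.polyval(coeffs, t_test)
    print("max abs error over test points:", np.max(np.abs(approx - f(x_test))))
    ```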

  19. Quality assessment of platelet concentrates prepared by platelet rich plasma-platelet concentrate, buffy coat poor-platelet concentrate (BC-PC and apheresis-PC methods

    Directory of Open Access Journals (Sweden)

    Singh Ravindra

    2009-01-01

    Full Text Available Background: Platelet rich plasma-platelet concentrate (PRP-PC), buffy coat poor-platelet concentrate (BC-PC), and apheresis-PC were prepared and their quality parameters were assessed. Study Design: In this study, the following platelet products were prepared: from random donor platelets, (i) platelet rich plasma-platelet concentrate (PRP-PC) and (ii) buffy coat poor-platelet concentrate (BC-PC), and from single donor platelets, (iii) apheresis-PC, by different methods. Their quality was assessed using the following parameters: swirling, volume of the platelet concentrate, platelet count, WBC count and pH. Results: A total of 146 platelet concentrates (64 of PRP-PC, 62 of BC-PC and 20 of apheresis-PC) were enrolled in this study. The mean volume of PRP-PC, BC-PC and apheresis-PC was 62.30±22.68 ml, 68.81±22.95 ml and 214.05±9.91 ml, and ranged from 22-135 ml, 32-133 ml and 200-251 ml respectively. The mean platelet count of PRP-PC, BC-PC and apheresis-PC was 7.6±2.97 x 10^10/unit, 7.3±2.98 x 10^10/unit and 4.13±1.32 x 10^11/unit, and ranged from 3.2-16.2 x 10^10/unit, 0.6-16.4 x 10^10/unit and 1.22-8.9 x 10^11/unit respectively. The mean WBC count in PRP-PC (n = 10), BC-PC (n = 10) and apheresis-PC (n = 6) units was 4.05±0.48 x 10^7/unit, 2.08±0.39 x 10^7/unit and 4.8±0.8 x 10^6/unit, and ranged from 3.4-4.77 x 10^7/unit, 1.6-2.7 x 10^7/unit and 3.2-5.2 x 10^6/unit respectively. A total of 26 units were analyzed for pH changes. Of these units, 10 each were PRP-PC and BC-PC and 6 units were apheresis-PC. Their mean pH was 6.7±0.26 (mean±SD) and ranged from 6.5-7.0, and no difference was observed among all three types of platelet concentrate. Conclusion: PRP-PC and BC-PC units were comparable in terms of swirling, platelet count per unit and pH. As expected, we found WBC contamination to be less in BC-PC than PRP-PC units. Variation in volume was more in BC-PC than PRP-PC units, and this suggests that further standardization is required for preparation of BC-PC.

  20. Chromatographic method of measurement of helium concentration in underground waters for dating in hydrological questions

    International Nuclear Information System (INIS)

    Najman, J.

    2008-04-01

    Research methods which use natural environmental indicators are widely applied in hydrology. Different concentrations of indicators and their isotopic components in ground waters allow the genesis of the waters to be determined and are a valuable source of information about the water flow dynamics. One of the significant indicators is helium. The concentration of ⁴He (helium) in ground water is a good indicator for water dating in a range from hundreds to millions of years (Aeschbach-Hertig et al., 1999; Andrews et al., 1989; Castro et al., 2000; Zuber et al., 2007). ⁴He is also used for dating young waters of age about 10 years (Solomon et al., 1996). The thesis describes the chromatographic method for measuring the helium concentration in ground waters for dating purposes, developed at IFJ PAN in Krakow. Chapter 1 contains a short introduction to ground water dating, and chapter 2 a description of the properties of helium and selected applications of helium, for example in technology and earthquake prediction. Helium sources in ground waters are described in chapter 3. The helium concentration in water after infiltration (originating from the atmosphere) into the ground water system depends mainly on the helium concentration resulting from equilibration with the atmosphere, increased by an additional concentration from ''excess air''. With increasing residence time of the ground water during the flow, a radiogenic, non-atmospheric component of helium also dissolves in the water. In chapter 4 two measurement methods of helium concentration in ground waters are introduced: the mass spectrometric and the gas chromatographic method. A detailed description of the developed chromatographic method for measuring the helium concentration in ground water is contained in chapter 5. To verify the developed method, the concentration of helium in ground waters from the regions of Krakow and Busko Zdroj was measured. For these waters the concentrations of helium are known from earlier mass spectrometric measurements. The results of

  1. Fixed Nadir Focus Concentrated Solar Power Applying Reflective Array Tracking Method

    Science.gov (United States)

    Setiawan, B.; DAMayanti, A. M.; Murdani, A.; Habibi, I. I. A.; Wakidah, R. N.

    2018-04-01

    The Sun is one of the most promising renewable energy sources to be developed and utilized; one of its utilizations is for solar thermal concentrators, CSP (Concentrated Solar Power). In CSP energy conversion, the concentrator is moved to track the sunlight so that it reaches the focus point. This method consumes considerable energy, because the concentrator unit has considerable weight, and with a large CSP the unit becomes wider and heavier. The added weight and width of the unit increase the torque needed to drive the concentrator and to hold it against wind gusts. One method to reduce energy consumption is to direct the sunlight with a reflective array to nadir through the CSP with a reflective Fresnel lens concentrator. The focus will be below, in the nadir direction, and the position of the concentrator will remain fixed even as the sun's elevation angle changes from morning to afternoon. The energy is thus concentrated maximally, because the concentrator is protected from wind gusts, and damage and changes in the focus construction will not occur. The research study and simulation of the reflective array (mechanical method) show the movement of the reflective angle. The distance between the reflectors and their angle are controlled by mechatronics. From the simulation using a 1 m² Fresnel lens, the solar energy efficiency is 60.88%, under the restriction that the intensity of sunlight at the tropics is 1 kW/peak from 6 AM until 6 PM.

  2. Continuous measurement of the radon concentration in water using electret ion chamber method

    International Nuclear Information System (INIS)

    Dua, S.K.; Hopke, P.K.

    1992-10-01

    A radon concentration of 300 pCi/L has been proposed by the US Environmental Protection Agency as a limit for radon dissolved in municipal drinking water supplies. There is therefore a need for a continuous monitor to ensure that the daily average concentration does not exceed this limit. In order to calibrate the system, varying concentrations of radon in water have been generated by bubbling radon-laden air through a dynamic flow-through water system. The steady-state concentration of radon in water from this system depends on the concentration of radon in air, the air bubbling rate, and the water flow rate. The measurement system has been designed and tested using a 1 L volume electret ion chamber to determine the radon in water. In this dynamic method, water flows directly through the electret ion chamber. Radon is released to the air and measured with the electret. A flow of air is maintained through the chamber to prevent the build-up of high radon concentrations and too rapid discharge of the electret. It was found that the system worked well when the air flow was induced by the application of suction. The concentration in the water was calculated from the measured concentration in air and from the water and air flow rates. Preliminary results suggest that the method has sufficient sensitivity to measure concentrations of radon in water with acceptable accuracy and precision

  3. Evaluation of single and double centrifugation tube methods for concentrating equine platelets.

    Science.gov (United States)

    Argüelles, D; Carmona, J U; Pastor, J; Iborra, A; Viñals, L; Martínez, P; Bach, E; Prades, M

    2006-10-01

    The aim of this study was to evaluate single and double centrifugation tube methods for concentrating equine platelets. Whole blood samples were collected from clinically normal horses and processed by use of single and double centrifugation tube methods to obtain four platelet concentrates (PCs): PC-A, PC-B, PC-C, and PC-D, which were analyzed using a flow cytometry hematology system for hemogram and additional platelet parameters (mean platelet volume, platelet distribution width, mean platelet component concentration, mean platelet component distribution width). Concentrations of transforming growth factor beta 1 (TGF-beta(1)) were determined in all the samples. Platelet concentrations for PC-A, PC-B, PC-C, and PC-D were 45%, 44%, 71%, and 21% higher, respectively, compared to the same values for citrated whole blood samples. TGF-beta(1) concentrations for PC-A, PC-B, PC-C, and PC-D were 38%, 44%, 44%, and 37% higher, respectively, compared to citrated whole blood sample values. In conclusion, the single and double centrifugation tube methods are reliable methods for concentrating equine platelets and for obtaining potentially therapeutic TGF-beta(1) levels.

  4. Analysis of 210Pb and 210Po concentrations in surface air by an α spectrometric method

    International Nuclear Information System (INIS)

    Winkler, R.; Hoetzl, H.; Chatterjee, B.

    1981-01-01

    A method is presented for determining the concentrations of airborne ²¹⁰Pb and ²¹⁰Po. The method employs α spectrometry to measure the count rate of ²¹⁰Po present on an electrostatic filter sample at two post-sampling times. The individual air concentrations of ²¹⁰Po and ²¹⁰Pb can be calculated from equations given. Sensitivity of the procedure is about 0.2 fCi ²¹⁰Po per m³ of air. The method was applied to the study of long-term variations and frequency distributions of ²¹⁰Po and ²¹⁰Pb concentrations in surface air at a nonpolluted location about 10 km outside of Munich, F.R.G., from 1976 through 1979. During this period the average concentration levels were found to be 14.2 fCi ²¹⁰Pb per m³ of air and 0.77 fCi ²¹⁰Po per m³ of air, respectively. (author)

  5. A fast iterative method for computing particle beams penetrating matter

    International Nuclear Information System (INIS)

    Boergers, C.

    1997-01-01

    Beams of microscopic particles penetrating matter are important in several fields. The application motivating our parameter choices in this paper is electron beam cancer therapy. Mathematically, a steady particle beam penetrating matter, or a configuration of several such beams, is modeled by a boundary value problem for a Boltzmann equation. Grid-based discretization of this problem leads to a system of algebraic equations. This system is typically very large because of the large number of independent variables in the Boltzmann equation (six if time independence is the only dimension-reducing assumption). If grid-based methods are to be practical at all, it is therefore necessary to develop fast solvers for the discretized problems. This is the subject of the present paper. For two-dimensional, mono-energetic, linear particle beam problems, we describe an iterative domain decomposition algorithm based on overlapping decompositions of the set of particle directions and computationally demonstrate its rapid, grid independent convergence. There appears to be no fundamental obstacle to generalizing the method to three-dimensional, energy dependent problems. 34 refs., 15 figs., 6 tabs

  6. Global Seabed Materials and Habitats Mapped: The Computational Methods

    Science.gov (United States)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean-basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. This project dbSEABED not only has the largest collection of seafloor materials data worldwide, but it uses advanced computing math to obtain the best possible coverages and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby optimizing the data that exists. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.

  7. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.

  8. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. The article also considers the problem of colour spaces, which are used as a filter and for pre-processing of images. Method of AdaB...
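
    The local binary pattern operator referred to here has a very compact definition: each pixel is encoded by an 8-bit word recording which of its eight neighbours are at least as bright as it. A minimal NumPy sketch of the basic 3×3 LBP (not the authors' full identification pipeline) is:

    ```python
    # Minimal sketch of the basic 3x3 local binary pattern (LBP) operator:
    # each interior pixel gets an 8-bit code built from comparisons with its
    # eight neighbours; the histogram of codes is a common texture descriptor.
    import numpy as np

    def lbp_3x3(img):
        img = img.astype(np.float64)
        center = img[1:-1, 1:-1]
        # neighbour offsets in clockwise order starting at the top-left
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(center, dtype=np.uint8)
        for bit, (di, dj) in enumerate(offsets):
            neighbour = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
            code |= ((neighbour >= center).astype(np.uint8) << bit)
        return code

    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(64, 64))            # stand-in for a real image
    codes = lbp_3x3(image)
    hist = np.bincount(codes.ravel(), minlength=256)       # 256-bin LBP histogram (feature vector)
    print(hist[:8])
    ```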

  9. Non-unitary probabilistic quantum computing circuit and method

    Science.gov (United States)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
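
    The general trick behind such circuits, embedding a non-unitary (contraction) operator A into a larger unitary acting on the system plus one ancilla and post-selecting on the ancilla outcome, can be illustrated numerically. The sketch below uses a standard Halmos-type unitary dilation purely as an illustration; it is not taken from the patent itself, and the operator A and the input state are arbitrary.

    ```python
    # Sketch: realize a non-unitary contraction A probabilistically by embedding it
    # in a unitary U acting on (ancilla tensor system) and post-selecting ancilla = 0.
    # U = [[A, sqrt(I - A A+)], [sqrt(I - A+ A), -A+]] is unitary for any ||A|| <= 1.
    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(2)
    dim = 4                                               # a 2-qubit system (n = 2)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    A /= 2.0 * np.linalg.norm(A, 2)                       # rescale so A is a contraction

    I = np.eye(dim)
    B = sqrtm(I - A @ A.conj().T)
    C = sqrtm(I - A.conj().T @ A)
    U = np.block([[A, B], [C, -A.conj().T]])
    assert np.allclose(U.conj().T @ U, np.eye(2 * dim))   # the dilation is unitary

    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)

    full = np.concatenate([psi, np.zeros(dim)])           # ancilla prepared in |0>
    out = U @ full
    p_success = np.linalg.norm(out[:dim]) ** 2            # probability ancilla is measured as 0
    post = out[:dim] / np.linalg.norm(out[:dim])          # state after successful post-selection

    target = A @ psi
    target /= np.linalg.norm(target)
    print("success probability:", round(float(p_success), 4))
    print("overlap with A|psi>/||A|psi>||:", abs(np.vdot(target, post)))
    ```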

  10. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    Full text: For a developing country buying its first nuclear power plants from a foreign supplier, regardless of the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameter specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK) continual work is going on related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution searches, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to the small working team, the lack of large and powerful computers, the absence of reliable basic nuclear data and the shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the authors' opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  11. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision makers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; the aggregate-block principle of the integrated logistics; and the use of a wide range of peripheral devices with unified information and hardware communication. The main attention is paid to the systematic study of the complex of technical support, in particular the definition of quality criteria for the operation of the technical complex, the development of methods for analyzing the information base of management information systems and defining the requirements for the technical means, as well as methods for the structural synthesis of the major subsystems of the integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods of analysis and synthesis of complex logistics that are suitable for use in the practice of engineering systems design. The objective function of the complex logistics management information system is the task of gathering, transmitting and processing specified amounts of information within regulated time intervals with the required degree of accuracy, while minimizing the reduced costs for the establishment and operation of the technical complex. Achieving the objective function of the complex logistics requires a certain organization of the interaction of information

  12. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  13. Standard Test Method for Electrical Performance of Concentrator Terrestrial Photovoltaic Modules and Systems Under Natural Sunlight

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of the electrical performance of photovoltaic concentrator modules and systems under natural sunlight using a normal incidence pyrheliometer. 1.2 The test method is limited to module assemblies and systems where the geometric concentration ratio specified by the manufacturer is greater than 5. 1.3 This test method applies to concentrators that use passive cooling where the cell temperature is related to the air temperature. 1.4 Measurements under a variety of conditions are allowed; results are reported under a select set of concentrator reporting conditions to facilitate comparison of results. 1.5 This test method applies only to concentrator terrestrial modules and systems. 1.6 This test method assumes that the module or system electrical performance characteristics do not change during the period of test. 1.7 The performance rating determined by this test method applies only at the period of the test, and implies no past or future performance level. 1.8...

  14. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described which is intended to account for and eliminate the effect of by-processes when interpreting gamma and neutron-gamma logging indications. With slight corrections the program can be used as a mathematical basis for logging diagram standardization by the method of multidimensional regression analysis and for the estimation of rock reservoir properties

  15. Direct concentration and viability measurement of yeast in corn mash using a novel imaging cytometry method.

    Science.gov (United States)

    Chan, Leo L; Lyettefi, Emily J; Pirani, Alnoor; Smith, Tim; Qiu, Jean; Lin, Bo

    2011-08-01

    Worldwide awareness of fossil-fuel depletion and global warming has been increasing over the last 30 years. Numerous countries, including the USA and Brazil, have introduced large-scale industrial fermentation facilities for bioethanol, biobutanol, or biodiesel production. Most of these biofuel facilities perform fermentation using standard baker's yeasts that ferment sugar present in corn mash, sugar cane, or other glucose media. In research and development in the biofuel industry, selection of yeast strains (for higher ethanol tolerance) and fermentation conditions (yeast concentration, temperature, pH, nutrients, etc.) can be studied to optimize fermentation performance. Yeast viability measurement is needed to identify higher ethanol-tolerant yeast strains, which may prolong the fermentation cycle and increase biofuel output. In addition, yeast concentration may be optimized to improve fermentation performance. Therefore, it is important to develop a simple method for concentration and viability measurement of fermenting yeast. In this work, we demonstrate an imaging cytometry method for concentration and viability measurements of yeast in corn mash directly from operating fermenters. It employs an automated cell counter, a dilution buffer, and staining solution from Nexcelom Bioscience to perform enumeration. The proposed method enables specific fluorescence detection of viable and nonviable yeasts, which can generate precise results for concentration and viability of yeast in corn mash. This method can provide an essential tool for research and development in the biofuel industry and may be incorporated into manufacturing to monitor yeast concentration and viability efficiently during the fermentation process.

  16. Description of a double centrifugation tube method for concentrating canine platelets.

    Science.gov (United States)

    Perazzi, Anna; Busetto, Roberto; Martinello, Tiziana; Drigo, Michele; Pasotto, Daniela; Cian, Francesco; Patruno, Marco; Iacopetti, Ilaria

    2013-07-22

    To evaluate the efficiency of platelet-rich plasma preparations by means of a double centrifugation tube method to obtain platelet-rich canine plasma at a concentration at least 4 times higher than the baseline value and a concentration of white blood cells not exceeding twice the reference range. A complete blood count was carried out for each sample and each concentrate. Whole blood samples were collected from 12 clinically healthy dogs (consenting blood donors). Blood was processed by a double centrifugation tube method to obtain platelet concentrates, which were then analyzed by a flow cytometry haematology system for haemogram. Platelet concentration and white blood cell count were determined in all samples. Platelet concentration at least 4 times higher than the baseline value and a white blood cell count not exceeding twice the reference range were obtained respectively in 10 cases out of 12 (83.3%) and 11 cases out of 12 (91.6%). This double centrifugation tube method is a relatively simple and inexpensive method for obtaining platelet-rich canine plasma, potentially available for therapeutic use to improve the healing process.

  17. Comparison of interpolation methods for sparse data: Application to wind and concentration fields

    International Nuclear Information System (INIS)

    Goodin, W.R.; McRae, G.J.; Seinfield, J.H.

    1979-01-01

    In order to produce gridded fields of pollutant concentration data and surface wind data for use in an air quality model, a number of techniques for interpolating sparse data values are compared. The techniques are compared using three data sets. One is an idealized concentration distribution for which the exact solution is known, the second is a potential flow field, while the third consists of surface ozone concentrations measured in the Los Angeles Basin on a particular day. The results of the study indicate that fitting a second-degree polynomial to each subregion (triangle) in the plane, with each data point weighted according to its distance from the subregion, provides a good compromise between accuracy and computational cost.
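
    The sketch below illustrates the general idea of a distance-weighted second-degree polynomial fit for estimating a field value at one grid node from sparse stations; it fits a single global quadratic rather than per-triangle surfaces as in the paper, and all station coordinates and ozone values are made up for illustration.

```python
import numpy as np

def quadratic_estimate(x, y, stations, values, eps=1e-6):
    """Estimate a field value at grid point (x, y) by fitting a second-degree
    polynomial to sparse station data, weighting each station by the inverse
    of its distance to (x, y).  Illustrative sketch only."""
    xs, ys = stations[:, 0], stations[:, 1]
    # design matrix for f(x, y) = a + b*x + c*y + d*x*y + e*x^2 + f*y^2
    A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys, xs**2, ys**2])
    w = 1.0 / (np.hypot(xs - x, ys - y) + eps)        # inverse-distance weights
    coeff, *_ = np.linalg.lstsq(A * w[:, None], values * w, rcond=None)
    return coeff @ np.array([1.0, x, y, x * y, x**2, y**2])

# Example: six monitoring sites (km) with ozone readings (pphm), one grid node
sites = np.array([[0, 0], [10, 2], [4, 9], [8, 8], [2, 6], [9, 4]], dtype=float)
obs = np.array([8.0, 12.0, 9.5, 11.0, 9.0, 10.5])
print(quadratic_estimate(5.0, 5.0, sites, obs))
```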

  18. Methods to homogenize electrochemical concentration cell (ECC ozonesonde measurements across changes in sensing solution concentration or ozonesonde manufacturer

    Directory of Open Access Journals (Sweden)

    T. Deshler

    2017-06-01

    ≥ 30 hPa, and thus the transfer function can be characterized as a simple ratio of the less sensitive to the more sensitive method. This ratio is 0.96 for both solution concentration and ozonesonde type. The ratios differ at pressures < 30 hPa such that OZ(0.5%)/OZ(1.0%) = 0.90 + 0.041 · log10(p) and OZ(SciencePump)/OZ(ENSCI) = 0.764 + 0.133 · log10(p), for p in units of hPa. For the manufacturer-recommended solution concentrations the dispersion of the ratio (SP-1.0% / EN-0.5%), while significant, is generally within 3% and centered near 1.0, such that no changes are recommended. For stations which have used multiple ozonesonde types with solution concentrations different from the WMO's and manufacturer's recommendations, this work suggests that a reasonably homogeneous data set can be created if the quantitative relationships specified above are applied to the non-standard measurements. This result is illustrated here in an application to the Nairobi data set.
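
    A minimal sketch of applying the pressure-dependent transfer ratio quoted above to homogenize a 0.5% solution profile to the 1.0% reference; it assumes the 0.5% measurement is the less sensitive one (so dividing by the ratio raises it), and the example numbers are illustrative, not from the paper.

```python
import numpy as np

def homogenize_half_pct_to_one_pct(o3, p_hPa):
    """Map an ozone profile measured with 0.5% sensing solution onto the 1.0%
    reference using the transfer ratios quoted above: a constant 0.96 for
    p >= 30 hPa and 0.90 + 0.041*log10(p) at lower pressures.  Assumption:
    the 0.5% measurement is the less sensitive one."""
    p = np.asarray(p_hPa, dtype=float)
    ratio = np.where(p >= 30.0, 0.96, 0.90 + 0.041 * np.log10(p))
    return np.asarray(o3, dtype=float) / ratio

# Example: ozone partial pressures (mPa) at 100, 30 and 10 hPa
print(homogenize_half_pct_to_one_pct([12.0, 9.0, 4.0], [100.0, 30.0, 10.0]))
```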

  19. Computation of dilution discharge and mean concentration of effluents within a tidal estuary

    Digital Repository Service at National Institute of Oceanography (India)

    DineshKumar, P.K.

    . ) is maximum during July (177.62 m³/sec) and minimum during October (29.01 m³/sec). The corresponding mean concentrations of effluent near the discharge point are 0.01 and 0.06 ppt. However, it should be recalled that only the average concentrations...

  20. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling follows from current trends toward strengthening the general-educational and worldview functions of computer science, which define the necessity of additional research of the…

  1. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  2. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguard's standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  3. ORION: a computer code for evaluating environmental concentrations and dose equivalent to human organs or tissue from airborne radionuclides

    International Nuclear Information System (INIS)

    Shinohara, K.; Nomura, T.; Iwai, M.

    1983-05-01

    The computer code ORION has been developed to evaluate the environmental concentrations and the dose equivalent to human organs or tissue from air-borne radionuclides released from multiple nuclear installations. The modified Gaussian plume model is applied to calculate the dispersion of the radionuclide. Gravitational settling, dry deposition, precipitation scavenging and radioactive decay are considered to be the causes of depletion and deposition on the ground or on vegetation. ORION is written in the FORTRAN IV language and can be run on IBM 360, 370, 303X, 43XX and FACOM M-series computers. 8 references, 6 tables
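
    For orientation, the sketch below shows the textbook ground-reflected Gaussian plume term that underlies such dispersion models; it is not the ORION code itself (no gravitational settling, dry deposition, scavenging or decay), and the example release parameters are illustrative.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (Bq/m^3) at crosswind
    distance y and height z (m) for a point release of strength Q (Bq/s) in a
    wind of speed u (m/s), with effective release height H (m).  sigma_y and
    sigma_z are the crosswind and vertical dispersion parameters (m), which in
    practice depend on downwind distance and atmospheric stability."""
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2) +
                np.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 GBq/s release from a 60 m stack, ground-level receptor on the plume axis
print(gaussian_plume(Q=1e9, u=3.0, y=0.0, z=0.0, H=60.0, sigma_y=80.0, sigma_z=40.0))
```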

  4. THE METHOD OF DESIGNING ASSISTED ON COMPUTER OF THE

    Directory of Open Access Journals (Sweden)

    LUCA Cornelia

    2015-05-01

    Full Text Available The basis for designing footwear soles is the shoe last. Shoe lasts have irregular shapes with various curves that cannot be represented by a simple mathematical function. In order to design footwear soles it is necessary to take a number of base contours from the shoe last; these contours are obtained with high precision in a 3D CAD system. The paper presents a computer-assisted method for designing footwear soles. The shoe last is copied using a 3D digitizer: the last is positioned on the data-gathering peripheral, which automatically follows the last's surface. The wire network obtained through digitizing is numerically interpolated with interpolation functions in order to obtain the spatial numerical shape of the shoe last. The 3D design of the sole is then carried out on this numerical shape in the following steps: construction of the sole's upper surface, construction of the lateral surface of the sole, generation of the linking surface between the lateral and planar sides of the sole and of the sole's margin, and design of the anti-skid area. The main advantages of the design method are its precision, the 3D visualization of the sole, and the possibility of making an informed decision on the acceptance of a new sole pattern.

  5. Method agreement between measuring of boar sperm concentration using Makler chamber and photometer

    OpenAIRE

    Mrkun Janko; Kosec M.; Zakošek Maja; Zrimšek Petra

    2007-01-01

    Determination of boar sperm concentration using a photometer is performed routinely by many artificial insemination (AI) laboratories. The agreement between determining sperm concentration using a Makler chamber and a photometer has been assessed. Method agreement was evaluated on the basis of scatter plots with a Deming regression line, absolute bias plots with limits of agreement, and relative bias plots. Coefficients of variation for the Makler chamber and the photometer were calculated as 6.575+3.461...

  6. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method of parallel computation, and the OSIRIS program complex that implements it for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either on one BESM-6 computer or simultaneously on two; this is provided by a package of interacting programs running on each computer. Program interaction on each computer is based on event techniques implemented in OS DISPAK. Parallel calculation with two BESM-6 computers accelerates the computation by a factor of 1.5.

  7. Method of estimating changes in vapor concentrations continuously generated from two-component organic solvents.

    Science.gov (United States)

    Hori, Hajime; Ishidao, Toru; Ishimatsu, Sumiyo

    2010-12-01

    We measured vapor concentrations continuously evaporated from two-component organic solvents in a reservoir and proposed a method to estimate and predict the evaporation rate or generated vapor concentrations. Two kinds of organic solvents were put into a small reservoir made of glass (3 cm in diameter and 3 cm high) that was installed in a cylindrical glass vessel (10 cm in diameter and 15 cm high). Air was introduced into the glass vessel at a flow rate of 150 ml/min, and the generated vapor concentrations were intermittently monitored for up to 5 hours with a gas chromatograph equipped with a flame ionization detector. The solvent systems tested in this study were the methanol-toluene system and the ethyl acetate-toluene system. The vapor concentrations of the more volatile component, that is, methanol in the methanol-toluene system and ethyl acetate in the ethyl acetate-toluene system, were high at first, and then decreased with time. On the other hand, the concentrations of the less volatile component were low at first, and then increased with time. A model for estimating multicomponent organic vapor concentrations was developed, based on a theory of vapor-liquid equilibria and a theory of the mass transfer rate, and estimated values were compared with experimental ones. The estimated vapor concentrations were in relatively good agreement with the experimental ones. The results suggest that changes in concentrations of two-component organic vapors continuously evaporating from a liquid reservoir can be estimated by the proposed model.
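
    As a point of reference, the sketch below shows only the vapor-liquid-equilibrium piece of such a model (Raoult's law for an ideal binary liquid); the mass-transfer rate term and the drift of the liquid composition over time, which the paper's model includes, are not represented, and the saturation pressures in the example are approximate values assumed for illustration.

```python
def headspace_ppm(x1, p_sat1_kPa, p_sat2_kPa, p_total_kPa=101.325):
    """Equilibrium vapor concentrations (ppm by volume) above an ideal binary
    liquid with mole fraction x1 of component 1, from Raoult's law."""
    p1 = x1 * p_sat1_kPa                 # partial pressure of component 1
    p2 = (1.0 - x1) * p_sat2_kPa         # partial pressure of component 2
    return 1e6 * p1 / p_total_kPa, 1e6 * p2 / p_total_kPa

# Example: equimolar methanol-toluene at about 25 C (approximate saturation
# pressures of 16.9 kPa for methanol and 3.8 kPa for toluene)
print(headspace_ppm(0.5, 16.9, 3.8))
```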

  8. A new method for rapid determination of carbohydrate and total carbon concentrations using UV spectrophotometry.

    Science.gov (United States)

    Albalasmeh, Ammar A; Berhe, Asmeret Asefaw; Ghezzehei, Teamrat A

    2013-09-12

    A new UV spectrophotometry based method for determining the concentration and carbon content of carbohydrate solution was developed. This method depends on the inherent UV absorption potential of hydrolysis byproducts of carbohydrates formed by reaction with concentrated sulfuric acid (furfural derivatives). The proposed method is a major improvement over the widely used Phenol-Sulfuric Acid method developed by DuBois, Gilles, Hamilton, Rebers, and Smith (1956). In the old method, furfural is allowed to develop color by reaction with phenol and its concentration is detected by visible light absorption. Here we present a method that eliminates the coloration step and avoids the health and environmental hazards associated with phenol use. In addition, avoidance of this step was shown to improve measurement accuracy while significantly reducing waiting time prior to light absorption reading. The carbohydrates for which concentrations and carbon content can be reliably estimated with this new rapid Sulfuric Acid-UV technique include: monosaccharides, disaccharides and polysaccharides with very high molecular weight. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  10. Computational Fluid Dynamics Methods and Their Applications in Medical Science

    Directory of Open Access Journals (Sweden)

    Kowalewski Wojciech

    2016-12-01

    Full Text Available As defined by the National Institutes of Health: “Biomedical engineering integrates physical, chemical, mathematical, and computational sciences and engineering principles to study biology, medicine, behavior, and health”. Many issues in this area are closely related to fluid dynamics. This paper provides an overview of the basic concepts concerning Computational Fluid Dynamics and its applications in medicine.

  11. Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.

    Science.gov (United States)

    Keating, G A; Bogen, K T

    2001-01-01

    Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
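
    The intake arithmetic behind such estimates reduces to a weighted sum; the sketch below shows it with made-up consumption amounts and concentrations, not values from the survey.

```python
def daily_ha_intake(meat_items, body_weight_kg):
    """Daily heterocyclic-amine intake (ng/kg/day) as the sum over meat items
    of grams consumed times estimated HA concentration (ng/g), divided by
    body weight.  All numbers in the example are illustrative."""
    total_ng = sum(grams * conc_ng_per_g for grams, conc_ng_per_g in meat_items)
    return total_ng / body_weight_kg

# Example: 150 g pan-fried chicken at 2.5 ng/g plus 100 g grilled beef at 1.2 ng/g
print(daily_ha_intake([(150.0, 2.5), (100.0, 1.2)], body_weight_kg=70.0))
```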

  12. Evaluation of the acute adverse reaction of contrast medium with high and moderate iodine concentration in patients undergoing computed tomography

    International Nuclear Information System (INIS)

    Nagamoto, Masashi; Gomi, Tatsuya; Terada, Hitoshi; Terada, Shigehiko; Kohda, Eiichi

    2006-01-01

    The aim of this prospective study was to evaluate and compare acute adverse reactions between contrast medium containing moderate and high concentrations of iodine in patients undergoing computed tomography (CT). A total of 945 patients undergoing enhanced CT were randomly assigned to receive one of two doses of contrast medium. We then prospectively investigated the incidence of adverse reactions. Iopamidol was used as the contrast medium, with a high concentration of 370 mgI/ml and a moderate concentration of 300 mgI/ml. The frequency of adverse reactions, such as pain at the injection site and heat sensation, were determined. Acute adverse reactions were observed in 2.4% (11/458) of the moderate-concentration group compared to 3.11% (15/482) of the high-concentration group; there was no significant difference in incidence between the two groups. Most adverse reactions were mild, and there was no significant difference in severity. One patient in the high-concentration group was seen to have a moderate adverse reaction. No correlation existed between the incidence of adverse reactions and patient characteristics such as sex, age, weight, flow amount, and flow rate. The incidence of pain was not significantly different between the two groups. In contrast, the incidence of heat sensation was significantly higher in the high-concentration group. The incidence and severity of acute adverse reactions were not significantly different between the two groups, and there were no severe adverse reactions in either group. (author)

  13. Time-resolved absorption and hemoglobin concentration difference maps: a method to retrieve depth-related information on cerebral hemodynamics.

    Science.gov (United States)

    Montcel, Bruno; Chabrier, Renée; Poulet, Patrick

    2006-12-01

    Time-resolved diffuse optical methods have been applied to detect hemodynamic changes induced by cerebral activity. We describe a reconstruction-free near-infrared spectroscopy (NIRS) method that allows depth-related information on absorption variations to be retrieved. Variations in the absorption coefficient of tissues have been computed over the duration of the whole experiment, but also over each temporal step of the time-resolved optical signal, using the microscopic Beer-Lambert law. Finite element simulations show that time-resolved computation of the absorption difference as a function of the propagation time of detected photons is sensitive to the depth profile of optical absorption variations. Differences in deoxyhemoglobin and oxyhemoglobin concentrations can also be calculated from multi-wavelength measurements. Experimental validations of the simulated results have been obtained for resin phantoms. They confirm that time-resolved computation of the absorption differences exhibited completely different behaviours, depending on whether these variations occurred deeply or superficially. The hemodynamic response to a short finger tapping stimulus was measured over the motor cortex and compared to experiments involving Valsalva manoeuvres. Functional maps were also calculated for the hemodynamic response induced by finger tapping movements.
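
    A minimal sketch of the principle named in the abstract: the microscopic Beer-Lambert law applied per photon arrival-time bin, followed by a two-wavelength solve for hemoglobin changes. The tissue refractive index and the idea of passing a user-supplied extinction-coefficient matrix are assumptions for illustration, not the authors' processing chain.

```python
import numpy as np

C_LIGHT_CM_PER_S = 3.0e10
N_TISSUE = 1.4            # assumed refractive index of tissue

def delta_mu_a(I_act, I_rest, t_ns):
    """Absorption change (1/cm) per photon arrival-time bin: photons detected
    at time t have travelled a pathlength v*t, so
    dmu_a(t) = -ln(I_act/I_rest) / (v*t)."""
    t_s = np.asarray(t_ns, dtype=float) * 1e-9
    pathlength_cm = (C_LIGHT_CM_PER_S / N_TISSUE) * t_s
    return -np.log(np.asarray(I_act) / np.asarray(I_rest)) / pathlength_cm

def hb_changes(dmua_lambda1, dmua_lambda2, E):
    """Convert absorption changes at two wavelengths into oxy- and
    deoxyhemoglobin concentration changes by solving dmua = E @ [dHbO2, dHb].
    E holds the extinction coefficients, which must be taken from tables."""
    return np.linalg.solve(E, np.array([dmua_lambda1, dmua_lambda2]))
```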

  14. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  15. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting the various realms of nuclear engineering, and an introductory course of advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the third issue, showing the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  16. The rheology of concentrated dispersions: structure changes and shear thickening in experiments and computer simulations

    NARCIS (Netherlands)

    Boersma, W.H.; Laven, J.; Stein, H.N.; Moldenaers, P.; Keunings, R.

    1992-01-01

    The flow-induced changes in the microstructure and rheology of very concentrated, shear-thickening dispersions are studied. Results obtained for polystyrene sphere dispersions are compared with previous data and computer simulations to give better insight into the processes occurring in the dispersions.

  17. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.

  18. Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples.

    Science.gov (United States)

    Fukushima, Romualdo S; Hatfield, Ronald D

    2004-06-16

    Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.

  19. A novel method to prepare concentrated conidial biomass formulation of Trichoderma harzianum for seed application.

    Science.gov (United States)

    Singh, P C; Nautiyal, C S

    2012-12-01

    To prepare a concentrated formulation of Trichoderma harzianum MTCC-3841 (NBRI-1055) with high colony forming units (CFU), a long shelf life and efficient root colonization by a simple scraping method. NBRI-1055 spores scraped from potato dextrose agar plates were used to prepare a concentrated formulation after optimizing carrier material, moisture content and spore harvest time. The process provides the advantage of maintaining the optimum moisture level by the addition of water rather than by dehydration. The formulation had an initial 11-12 log(10) CFU g(-1). Its concentrated form reduces the application amount by 100 times (10 g 100 kg(-1) seed) and provides 3-4 log(10) CFU seed(-1). Shelf life of the product was experimentally determined at 30 and 40 °C and predicted at other temperatures following the Arrhenius equation. Compared to similar products, the concentrated formulation provides the extra advantage of smaller packaging for storage and transportation, cutting down product cost. Seed application of the formulation recorded a significant increase in plant growth promotion. A stable and effective formulation of Trichoderma harzianum NBRI-1055 was obtained by a simple scraping method. A new method for the production of a concentrated, stable, effective and cost-efficient formulation of T. harzianum has been validated for seed application. © 2012 The Society for Applied Microbiology.
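
    The Arrhenius extrapolation mentioned in the abstract amounts to fitting an activation energy from the rates measured at two temperatures and predicting the rate elsewhere; the sketch below shows that calculation with made-up rate constants, not the paper's data.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_extrapolate(T_target_C, k1, T1_C, k2, T2_C):
    """Extrapolate a first-order CFU decline rate constant to another storage
    temperature from rates measured at two temperatures, assuming Arrhenius
    behaviour."""
    T, T1, T2 = (np.asarray(T_target_C, dtype=float) + 273.15,
                 T1_C + 273.15, T2_C + 273.15)
    Ea = R_GAS * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)   # activation energy, J/mol
    return k1 * np.exp(-Ea / R_GAS * (1.0 / T - 1.0 / T1))

# Example: 0.10/month at 30 C and 0.35/month at 40 C, predicted at 25 C storage
print(arrhenius_extrapolate(25.0, k1=0.10, T1_C=30.0, k2=0.35, T2_C=40.0))
```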

  20. A method for calculation of dose per unit concentration values for aquatic biota

    International Nuclear Information System (INIS)

    Batlle, J Vives i; Jones, S R; Gomez-Ros, J M

    2004-01-01

    A dose per unit concentration database has been generated for application to ecosystem assessments within the FASSET framework. Organisms are represented by ellipsoids of appropriate dimensions, and the proportion of radiation absorbed within the organisms is calculated using a numerical method implemented in a series of spreadsheet-based programs. Energy-dependent absorbed fraction functions have been derived for calculating the total dose per unit concentration of radionuclides present in biota or in the media they inhabit. All radionuclides and reference organism dimensions defined within FASSET for marine and freshwater ecosystems are included. The methodology has been validated against more complex dosimetric models and compared with human dosimetry based on ICRP 72. Ecosystem assessments for aquatic biota within the FASSET framework can now be performed simply, once radionuclide concentrations in target organisms are known, either directly or indirectly by deduction from radionuclide concentrations in the surrounding medium

  1. Method of determining the radioactivity concentration in contaminated or activated components and materials, facility for implementing the method

    International Nuclear Information System (INIS)

    Kroeger, J.

    1988-01-01

    The invention concerns a method for determining the radioactivity of contaminated structural components by means of detectors. The measured values are processed and displayed by a computer. From the given parameters, the current, error-adjusted threshold contamination or threshold pulse rate is determined continuously as the gate time elapses and is then compared with the actual measured value of the specific surface activity, mass-specific activity, or pulse rate. (orig.) [de]

  2. Comparison of the surface wave method and the indentation method for measuring the elasticity of gelatin phantoms of different concentrations.

    Science.gov (United States)

    Zhang, Xiaoming; Qiang, Bo; Greenleaf, James

    2011-02-01

    The speed of the surface Rayleigh wave, which is related to the viscoelastic properties of the medium, can be measured by noninvasive and noncontact methods. This technique has been applied in biomedical applications such as detecting skin diseases. Static spherical indentation, which quantifies material elasticity through the relationship between loading force and displacement, has been applied in various areas including a number of biomedical applications. This paper compares the results obtained from these two methods on five gelatin phantoms of different concentrations (5%, 7.5%, 10%, 12.5% and 15%). The concentrations are chosen because the elasticity of such gelatin phantoms is close to that of tissue types such as skin. The results show that both the surface wave method and the static spherical indentation method produce the same values for shear elasticity. For example, the shear elasticities measured by the surface wave method are 1.51, 2.75, 5.34, 6.90 and 8.40 kPa on the five phantoms, respectively. In addition, by studying the dispersion curve of the surface wave speed, shear viscosity can be extracted. The measured shear viscosities are 0.00, 0.00, 0.13, 0.39 and 1.22 Pa·s on the five phantoms, respectively. The results also show that the shear elasticity of the gelatin phantoms increases linearly with their prepared concentrations. The linear regressions between concentration and shear elasticity have R(2) values larger than 0.98 for both methods. Copyright © 2010 Elsevier B.V. All rights reserved.
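
    For the non-dispersive elastic case, converting a measured surface wave speed into shear elasticity is a one-line relation; the sketch below assumes a nearly incompressible medium (Rayleigh speed about 0.955 times the shear wave speed) and a gelatin density of roughly 1000 kg/m³, which are standard assumptions rather than values from the paper. Extracting viscosity, as the authors do, additionally requires fitting the dispersion curve.

```python
def shear_modulus_from_rayleigh(c_rayleigh_m_s, density_kg_m3=1000.0):
    """Shear elasticity (Pa) from a measured surface (Rayleigh) wave speed,
    assuming a nearly incompressible elastic medium: c_R ~ 0.955 * c_s and
    mu = rho * c_s**2."""
    c_shear = c_rayleigh_m_s / 0.955
    return density_kg_m3 * c_shear ** 2

# Example: a 1.2 m/s surface wave in a gelatin phantom of about 1000 kg/m^3
print(shear_modulus_from_rayleigh(1.2))   # roughly 1.6 kPa, in the range reported above
```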

  3. Simulation of laminar and turbulent concentric pipe flows with the isogeometric variational multiscale method

    KAUST Repository

    Ghaffari Motlagh, Yousef; Ahn, Hyungtaek; Hughes, Thomas Jr R; Calo, Victor M.

    2013-01-01

    We present an application of the residual-based variational multiscale modeling methodology to the computation of laminar and turbulent concentric annular pipe flows. Isogeometric analysis is utilized for higher-order approximation of the solution using Non-Uniform Rational B-Splines (NURBS). The ability of NURBS to exactly represent curved geometries makes NURBS-based isogeometric analysis attractive for the application to the flow through annular channels. We demonstrate the applicability of the methodology to both laminar and turbulent flow regimes. © 2012 Elsevier Ltd.

  4. Modelling of elementary computer operations using the intellect method

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov-kushnarenko, Yu P

    1982-01-01

    The formal apparatus of intellect theory is used to describe functions of machine intelligence. A mathematical description is proposed, as well as a machine realisation of some simple computer operations as switched networks. 5 references.

  5. Pair Programming as a Modern Method of Teaching Computer Science

    OpenAIRE

    Irena Nančovska Šerbec; Branko Kaučič; Jože Rugelj

    2008-01-01

    At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactic, pedagogical, mathematical and other interdisciplinary knowledge, students gain programming knowledge and skills that are crucial for computer science teachers. In all courses, the main emphasis is on the acquisition of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM C...

  6. A new method research of monitoring low concentration NO and SO2 mixed gas

    Science.gov (United States)

    Bo, Peng; Gao, Chao; Guo, Yongcai; Chen, Fang

    2018-01-01

    In order to reduce environmental pollution, China has implemented new ultra-low emission control regulations for polluting gases, requiring new coal-fired power plants to emit less than 30 ppm SO2, 75 ppm NO and 50 ppm NO2. When monitoring low concentrations of mixed NO and SO2, DOAS technology faces new challenges: SO2 absorption is significantly weakened at its original absorption peak and the SNR is very low, so the characteristic signal is difficult to extract and the SO2 concentration cannot be obtained. Likewise, the NO signal cannot be separated from the mixed-gas spectrum at 200-230 nm by the law of spectral superposition, so the NO concentration cannot be calculated. Classical DOAS technology therefore cannot meet the monitoring requirements. In this paper, another absorption band of SO2 is identified whose SNR is 10 times higher than before and which is not affected by NO, so the SO2 concentration can be calculated accurately. A new method of segmenting and separating the spectral signals is proposed that achieves accurate monitoring of the low-concentration mixed gas, which classical DOAS cannot do. The detection limit of the method is 0.1 ppm per meter, better than before. The relative error is below 5% for concentrations between 0 and 5 ppm, and below 1.5% for NO between 6 and 75 ppm and SO2 between 6 and 30 ppm, a significant breakthrough in low-concentration NO and SO2 monitoring. The work has great scientific significance and reference value for the development of coal-fired power plant emission control, atmospheric environmental monitoring and high-precision on-line instrumentation.

  7. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing

  8. Organic farming methods affect the concentration of phenolic compounds in sea buckthorn leaves

    OpenAIRE

    Heinäaho, Merja; Pusenius, Jyrki; Julkunen-Tiitto, Riitta

    2006-01-01

    Berries, juice and other products of organically cultivated sea buckthorn (Hippophae rhamnoides L., Elaeagnaceae) are in great demand because of their beneficial health effects. The availability of berries is not sufficient, especially for the locally growing market. However, no research has previously been done on the effects of farming methods on the quality of sea buckthorn tissues. We established a field experiment to study the effects of organic farming methods on the concentrations of phenolic ...

  9. Assessing Methods for Mapping 2D Field Concentrations of CO2 Over Large Spatial Areas for Monitoring Time Varying Fluctuations

    Science.gov (United States)

    Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.; Braun, M.; Levine, Z. H.; Pintar, A. L.

    2014-12-01

    This work presents a methodology for constructing 2D estimates of CO2 field concentrations from integrated open path measurements of CO2 concentrations. It provides a description of the methodology, an assessment based on simulated data and results from preliminary field trials. The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) system, currently under development by Exelis and AER, consists of a set of laser-based transceivers and a number of retro-reflectors coupled with a cloud-based compute environment to enable real-time monitoring of integrated CO2 path concentrations, and provides 2D maps of estimated concentrations over an extended area of interest. The GreenLITE transceiver-reflector pairs provide laser absorption spectroscopy (LAS) measurements of differential absorption due to CO2 along intersecting chords within the field of interest. These differential absorption values for the intersecting chords of horizontal path are not only used to construct estimated values of integrated concentration, but also employed in an optimal estimation technique to derive 2D maps of underlying concentration fields. This optimal estimation technique combines these sparse data with in situ measurements of wind speed/direction and an analytic plume model to provide tomographic-like reconstruction of the field of interest. This work provides an assessment of this reconstruction method and preliminary results from the Fall 2014 testing at the Zero Emissions Research and Technology (ZERT) site in Bozeman, Montana. This work is funded in part under the GreenLITE program developed under a cooperative agreement between Exelis and the National Energy and Technology Laboratory (NETL) under the Department of Energy (DOE), contract # DE-FE0012574. Atmospheric and Environmental Research, Inc. is a major partner in this development.

  10. Developing an interactive computational system to simulate radon concentration inside ancient egyptian tombs

    Energy Technology Data Exchange (ETDEWEB)

    Metwally, S M; Salama, E; El-Fikia, S A [Faculty of Science, Department of Physics, Ain Shams University, P. O. Box 11566, Cairo (Egypt); Abo-EImagd, M; Eissa, H M [National Institute for Standard, Radiation Measurements Department, P. O. Box 136Giza code no. 12211 RSSP (Egypt)

    2007-06-15

    RSSP (Radon Scale Software Package) is an interactive support system that simulates the radon concentration inside ancient Egyptian tombs and the consequences for the population in terms of internal and external exposure. RSSP consists of three interconnected modules: the first simulates the radon concentration inside ancient Egyptian tombs using a developed mathematical model, which introduces the possibility of controlling the rate of radon accumulation via additional artificial ventilation systems. The inputs come from an editable database for the tombs that includes the geometrical dimensions and some environmental parameters, such as temperature and outdoor radon concentration, at the tomb locations. The second module simulates the absorbed dose due to internal exposure to radon and its progeny. The third module simulates the absorbed dose due to external exposure to gamma rays emitted from the tomb wall rocks. RSSP makes it possible to follow the progress of radon concentration, as well as internal and external absorbed dose, over a wide range of times (seconds, minutes, hours and days) via numerical data and the corresponding graphical interface.

  11. Developing an interactive computational system to simulate radon concentration inside ancient egyptian tombs

    International Nuclear Information System (INIS)

    Metwally, S. M.; Salama, E.; El-Fikia, S. A.; Abo-EImagd, M.; Eissa, H. M.

    2007-01-01

    RSSP (Radon Scale Software Package) is an interactive support system that simulates the radon concentration inside ancient Egyptian tombs and the consequences for the population in terms of internal and external exposure. RSSP consists of three interconnected modules: the first simulates the radon concentration inside ancient Egyptian tombs using a developed mathematical model, which introduces the possibility of controlling the rate of radon accumulation via additional artificial ventilation systems. The inputs come from an editable database for the tombs that includes the geometrical dimensions and some environmental parameters, such as temperature and outdoor radon concentration, at the tomb locations. The second module simulates the absorbed dose due to internal exposure to radon and its progeny. The third module simulates the absorbed dose due to external exposure to gamma rays emitted from the tomb wall rocks. RSSP makes it possible to follow the progress of radon concentration, as well as internal and external absorbed dose, over a wide range of times (seconds, minutes, hours and days) via numerical data and the corresponding graphical interface.
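
    A generic single-zone radon accumulation model with a ventilation term, of the kind such a module might implement, reduces to a linear ODE with an analytical solution; the sketch below is not the RSSP code, and the exhalation rate, surface-to-volume ratio, outdoor concentration and ventilation rate in the example are assumed for illustration.

```python
import numpy as np

LAMBDA_RN = 7.56e-3   # Rn-222 decay constant, 1/h

def indoor_radon(t_h, exhalation_Bq_m2_h, S_over_V_m, lam_vent_per_h,
                 C_out=10.0, C0=0.0):
    """Indoor radon concentration (Bq/m^3) versus time for a single-zone
    accumulation model, dC/dt = E*S/V + lv*C_out - (lambda_Rn + lv)*C,
    solved analytically."""
    t = np.asarray(t_h, dtype=float)
    k = LAMBDA_RN + lam_vent_per_h                        # total removal rate, 1/h
    C_eq = (exhalation_Bq_m2_h * S_over_V_m + lam_vent_per_h * C_out) / k
    return C_eq + (C0 - C_eq) * np.exp(-k * t)

# Example: wall exhalation 0.8 Bq/(m^2 h), surface-to-volume ratio 2 1/m,
# artificial ventilation of 0.3 air changes per hour
print(indoor_radon(np.array([1.0, 6.0, 24.0, 72.0]), 0.8, 2.0, 0.3))
```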

  12. Design of a concentric heat exchanger using computational fluid dynamics as design tool

    NARCIS (Netherlands)

    Oosterhuis, Joris; Bühler, Simon; wilcox, D; van der Meer, Theodorus H.

    2013-01-01

    A concentric gas-to-gas heat exchanger is designed for application as a recuperator in the domestic boiler industry. The recuperator recovers heat from the exhaust gases of a combustion process to preheat the ingoing gaseous fuel mixture resulting in increased fuel efficiency. This applied study

  13. Computational fluid dynamics simulations of flow and concentration polarization in forward osmosis membrane systems

    DEFF Research Database (Denmark)

    Gruber, M.F.; Johnson, C.J.; Tang, C.Y.

    2011-01-01

    is inspired by previously published CFD models for pressure-driven systems and the general analytical theory for flux modeling in asymmetric membranes. Simulations reveal a non-negligible external concentration polarization on the porous support, even when accounting for high cross-flow velocity and slip...

  14. Computation of major solute concentrations and loads in German rivers using regression analysis.

    Science.gov (United States)

    Steele, T.D.

    1980-01-01

    Regression functions between concentrations of several inorganic solutes and specific conductance and between specific conductance and stream discharge were derived from intermittent samples collected for 2 rivers in West Germany. These functions, in conjunction with daily records of streamflow, were used to determine monthly and annual solute loadings. -from Author
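
    The two-step regression and load accumulation described above can be sketched as follows; a log-log functional form and all sample values are assumed here for illustration, since the abstract does not specify the regression form or report data.

```python
import numpy as np

def fit_loglinear(x, y):
    """Least-squares fit of log10(y) = a + b*log10(x); returns (a, b)."""
    b, a = np.polyfit(np.log10(x), np.log10(y), 1)
    return a, b

# Intermittent samples (illustrative numbers): discharge (m^3/s),
# specific conductance (uS/cm) and chloride concentration (mg/L)
Q_s  = np.array([12.0, 25.0, 40.0, 80.0, 150.0])
SC_s = np.array([950.0, 800.0, 700.0, 560.0, 430.0])
Cl_s = np.array([210.0, 170.0, 140.0, 105.0, 75.0])
a1, b1 = fit_loglinear(Q_s, SC_s)      # conductance from discharge
a2, b2 = fit_loglinear(SC_s, Cl_s)     # concentration from conductance

# Apply both functions to a daily discharge record and accumulate the monthly load
Q_daily = np.full(30, 60.0)                                   # m^3/s
Cl_daily = 10 ** (a2 + b2 * (a1 + b1 * np.log10(Q_daily)))    # mg/L = g/m^3
load_tonnes = np.sum(Cl_daily * Q_daily * 86400.0) * 1e-6     # g -> tonnes
print(load_tonnes)
```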

  15. A second-generation computational modeling of cardiac electrophysiology: response of action potential to ionic concentration changes and metabolic inhibition.

    Science.gov (United States)

    Alaa, Nour Eddine; Lefraich, Hamid; El Malki, Imane

    2014-10-21

    Cardiac arrhythmias are becoming one of the major health care problems in the world, causing numerous serious conditions including stroke and sudden cardiac death. Furthermore, cardiac arrhythmias are intimately related to the signaling ability of cardiac cells and are caused by signaling defects. Consequently, modeling the electrical activity of the heart, and the complex signaling models that underlie dangerous arrhythmias such as tachycardia and fibrillation, necessitates a quantitative model of action potential (AP) propagation. Many electrophysiological models that accurately reproduce the dynamical characteristics of the action potential in cells have been introduced. However, these models are very complex and computationally very time consuming. Consequently, a large amount of research is devoted to designing models with lower computational complexity. This paper presents a new model for analyzing the propagation of ionic concentrations and electrical potential in space and time. In this model, the transport of ions is governed by the Nernst-Planck flux equation (NP), and the electrical interaction of the species is described by a new cable equation. This set of coupled nonlinear partial differential equations is solved numerically. We first describe the mathematical model. To realize the numerical simulation of the model, we proceed by a finite element discretization and then choose an appropriate resolution algorithm. We give numerical simulations obtained for different input scenarios in the case of a suicide-substrate reaction, which were compared to those obtained in the literature. These input scenarios have been chosen so as to provide an intuitive understanding of the dynamics of the model. By accessing the time and space domains, it is shown that interpreting the electrical potential of the cell membrane at steady state is incorrect. This model is general and applies to ions of any charge in space and time.

  16. Method for recovering aroma concentrate from a caffeine- or theobromine-comprising food base material

    NARCIS (Netherlands)

    Kattenberg, H.R.; Willemsen, J.H.A.; Starmans, D.A.J.; Hoving, H.D.; Winters, M.G.M.

    2002-01-01

    Described is a method for recovering aroma concentrate from a caffeine- or theobromine-comprising food base material, such as coffee or tea, and in particular cocoa, at least comprising the steps of: introducing the food base material into an aqueous extractant and incubating the food base material

  17. A fast nonlinear conjugate gradient based method for 3D concentrated frictional contact problems

    NARCIS (Netherlands)

    J. Zhao (Jing); E.A.H. Vollebregt (Edwin); C.W. Oosterlee (Cornelis)

    2015-01-01

    This paper presents a fast numerical solver for a nonlinear constrained optimization problem, arising from 3D concentrated frictional shift and rolling contact problems with dry Coulomb friction. The solver combines an active set strategy with a nonlinear conjugate gradient method. One

  18. Methods to assess high-resolution subsurface gas concentrations and gas fluxes in wetland ecosystems

    DEFF Research Database (Denmark)

    Elberling, Bo; Kühl, Michael; Glud, Ronnie Nøhr

    2013-01-01

    The need for measurements of soil gas concentrations and surface fluxes of greenhouse gases at high temporal and spatial resolution in wetland ecosystems has led to the introduction of several new analytical techniques and methods. In addition to the automated flux chamber methodology for high-re...

  19. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-ups of computations, especially those related to problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python.

  20. Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).

    Science.gov (United States)

    Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd

    2012-01-01

    This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (blood bank method of intermittent flow centrifuge system/platelet apheresis and by the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. There were only rare correlations between leukocyte counts and PDGF-AB levels, but comparison of leukocyte counts and PDGF-AB levels demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets and emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.

  1. X-ray computed tomography imaging method which is immune to beam hardening effect

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Uesaka, Akio; Nomiya, Seiichiro; Onabe, Hideaki

    2009-01-01

    For the easy treatment of cancers, their early detection is an important subject of study. X-ray transmission measurement and computed tomography (CT) are powerful tools for finding cancers. X-ray CT shows a cross-sectional view of the human body and is able to detect cancers as small as 1 cm in diameter. CT, however, gives a very high dose exposure to the human body: some 10 to 1000 times higher than chest radiography. Frequent medical check-ups using CT are therefore not feasible, in view of both individual and public accumulated dose exposures. The authors have been working on the reduction of dose exposure in x-ray transmission measurements for detecting iodine contrast media, which concentrate in cancers. In our method, the energy information of x-rays is employed: in conventional x-ray transmission measurements, x-rays are measured as a current and the energy of each x-ray is ignored. The numbers of x-ray events, φ1 and φ2, whose energies are lower and higher than that of the iodine K-edge, respectively, are used for estimating the iodine thickness in cancers. Moreover, high-energy x-rays, which are not sensitive to absorption by iodine, are cut by a filter made of a material with a higher atomic number than iodine. We call this method the filtered x-ray energy subtraction (FIX-ES) method. The FIX-ES method was shown to be twice as sensitive to iodine as the current-measurement method. With a suitable choice of filter thickness, the minimum dose exposure in FIX-ES is 30% of that when white x-rays are employed. In the study described above, we concentrated on the observation of the cancer region. In this study, a cancer phantom in normal tissue is observed by the FIX-ES method. The results are compared with the ones obtained by the current-measurement method. (author)

  2. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    Energy Technology Data Exchange (ETDEWEB)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib [Department of Electrical Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad (Iran)

    2010-03-15

    Choosing a desired policy for divestiture of dominant firms' generation assets has been a challenging task and an open question for the regulatory authority. To deal with this problem, in this paper, an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to operation constraints of the electricity market. It is proved here that the market concentration corresponding to the operating conditions is guaranteed to lie within an interval calculated by the analytical method. For situations where the cost function of generators is available, the ACE approach is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. The power producer using the Q-learning method learns from past experience to implicitly identify market power and find the desired response in competing with its rivals. Both methods are applied in a multi-area power system and the effects of different divestiture policies on market behavior are analyzed. (author)

  3. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    International Nuclear Information System (INIS)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib

    2010-01-01

    Choosing a desired policy for divestiture of dominant firms' generation assets has been a challenging task and an open question for the regulatory authority. To deal with this problem, in this paper, an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to operation constraints of the electricity market. It is proved here that the market concentration corresponding to the operating conditions is guaranteed to lie within an interval calculated by the analytical method. For situations where the cost function of generators is available, the ACE approach is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. The power producer using the Q-learning method learns from past experience to implicitly identify market power and find the desired response in competing with its rivals. Both methods are applied in a multi-area power system and the effects of different divestiture policies on market behavior are analyzed.
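
    A minimal sketch of the kind of Q-learning loop such a producer agent might run is shown below; the reward function, action space and all parameters are hypothetical placeholders, since the actual ACE model computes profit from market clearing against the rivals' bids.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_profit(action, n_actions=10):
    """Toy stand-in for the market-clearing step: profit peaks at a moderate
    markup and collapses if the producer overbids and loses dispatch."""
    markup = action / n_actions
    return markup * max(0.0, 1.0 - 1.5 * markup)

def learn_markup(n_actions=10, episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    """Single-state Q-learning over discrete markup actions with an
    epsilon-greedy exploration policy."""
    Q = np.zeros(n_actions)
    for _ in range(episodes):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q))
        r = toy_profit(a, n_actions)
        Q[a] += alpha * (r + gamma * np.max(Q) - Q[a])   # Q-learning update
    return int(np.argmax(Q))

print(learn_markup())   # index of the learned markup level
```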

  4. Automated Analysis of Human Sperm Number and Concentration (Oligospermia) Using Otsu Threshold Method and Labelling

    Science.gov (United States)

    Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.

    2016-01-01

    Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. Normally the sperm concentration is 20-120 million/ml, while oligospermia patients have a sperm concentration of less than 20 million/ml. Sperm tests are done in the fertility laboratory to determine oligospermia by checking fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using a Neubauer improved counting chamber and counted manually. In order to count them automatically, this research developed an automated system to analyse and count the sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphology. The sperm data used were fresh sperm analysed directly in the laboratory from 10 people. The test results using the A2SC2 method show an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
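
    The combination of Otsu thresholding and connected-component labelling named above can be sketched in a few lines; the function below is only an illustration of that pipeline, and the minimum object area, dark-on-bright polarity and imaged chamber volume are assumptions, not parameters of the A2SC2 system.

```python
import numpy as np
from skimage import filters, measure, morphology

def count_sperm(gray, min_area=20, imaged_volume_nl=100.0, dilution=1.0):
    """Count sperm heads in a counting-chamber image via Otsu thresholding and
    connected-component labelling, then convert the count to a concentration."""
    t = filters.threshold_otsu(gray)                    # global Otsu threshold
    binary = gray < t                                   # assume dark objects on a bright field
    binary = morphology.remove_small_objects(binary, min_area)
    labels = measure.label(binary)                      # connected-component labelling
    count = int(labels.max())
    per_ml = count * dilution * 1e6 / imaged_volume_nl  # 1 ml = 1e6 nl
    return count, per_ml / 1e6                          # (count, million per ml)
```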

  5. Effect of local burn-up variation on computed mean nuclide concentrations

    International Nuclear Information System (INIS)

    Moeller, W.

    1982-01-01

    Mean concentrations of U-235, U-236, U-238, Pu-239, Pu-240, Pu-241 and Pu-242 in some volume areas of WWER-440 fuel assemblies have been calculated from corresponding burn-up microdistribution data and compared with those calculated from burn-up mean values. Differences occurring were below 3% for the uranium nuclides but, at low burn-ups, considerable for Pu-241 and Pu-242. (author)

  6. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    In an attempt to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which greatly increases the DHT computation speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  7. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    In an attempt to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which greatly increases the DHT computation speed without decreasing its accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  8. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and it can also be applied to biped systems as well as some simple closed-chain robot systems.

  9. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  10. Simplified method of computation for fatigue crack growth

    International Nuclear Information System (INIS)

    Stahlberg, R.

    1978-01-01

    A procedure is described for drastically reducing the computation time in calculating crack growth for variable-amplitude fatigue loading when the loading sequence is periodic. By the proposed procedure, the crack growth, r, per loading period is approximated as a smooth function and its reciprocal is integrated, rather than summing crack growth cycle by cycle. The savings in computation time result because only a few pointwise values of r must be computed to generate an accurate interpolation function for numerical integration. Further time savings can be achieved by selecting the stress intensity coefficient (stress intensity divided by load) as the argument of r. Once r has been obtained as a function of the stress intensity coefficient for a given material, environment, and loading sequence, it applies to any configuration of cracked structure. (orig.)
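    The time saving comes from evaluating r at only a few points, interpolating it smoothly, and integrating its reciprocal instead of summing cycle by cycle. A sketch under assumed, purely illustrative data follows; the pointwise r values, the geometry function and the crack lengths are not from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

# Pointwise crack growth per loading period r(k), computed (expensively) at only a
# few stress intensity coefficients k = K / load; all values are illustrative.
k_pts = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
r_pts = np.array([2e-7, 8e-7, 2.5e-6, 6e-6, 1.3e-5])   # metres per loading period
r_of_k = CubicSpline(k_pts, r_pts)

def k_of_a(a):
    """Assumed stress intensity coefficient for the cracked geometry."""
    return 12.0 * np.sqrt(np.pi * a)

# Number of loading periods needed to grow the crack from a0 to af:
a0, af = 0.0025, 0.019                                  # metres, illustrative
periods, _ = quad(lambda a: 1.0 / r_of_k(k_of_a(a)), a0, af)
print(f"estimated loading periods: {periods:.0f}")
```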

  11. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    Because of the large number of times this calculation needs to be performed, it is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.

  12. A cost-benefit analysis of methods for the determination of biomass concentration in wastewater treatment.

    Science.gov (United States)

    Hernandez, J E; Bachmann, R T; Edyvean, R G J

    2006-01-01

    The measurement of biomass concentration is important in biological wastewater treatment. This paper compares the accuracy and costs of the traditional volatile suspended solids (VSS) method and the proposed suspended organic carbon (SOC) method. VSS and SOC values of a dilution system were very well correlated (R² = 0.9995). VSS and SOC of 16 samples were determined; the mean SOC/VSS ratio (0.52, n = 16, σ = 0.01) was close to the theoretical value (0.53). For the costing analysis, two hypothetical cases were analysed. In case A, it is assumed that 108 samples are analysed annually from two continuous reactors. Case B represents a batch experiment to be carried out in 24 incubated serum bottles. The savings when using the SOC method were 11,987 pounds for case A and 90 pounds for case B. This study suggests the use of the SOC method as a time-saving and lower-cost biomass concentration measurement.
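    The practical workflow implied above is a one-time linear calibration of SOC against VSS, after which routine SOC readings are reported as VSS-equivalent biomass. A sketch with made-up calibration data (not the paper's measurements):

```python
import numpy as np

# Hypothetical paired measurements from a dilution series (mg/L)
vss = np.array([100., 200., 400., 600., 800., 1000.])
soc = np.array([53., 105., 207., 315., 414., 521.])

slope, intercept = np.polyfit(vss, soc, 1)
r2 = np.corrcoef(vss, soc)[0, 1] ** 2
print(f"SOC/VSS slope = {slope:.3f} (theoretical ratio ~0.53), R^2 = {r2:.4f}")

# Routine SOC readings can then be reported as VSS-equivalent biomass:
def vss_equivalent(soc_reading):
    return (soc_reading - intercept) / slope

print(vss_equivalent(260.0), "mg/L VSS-equivalent")
```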

  13. Estimation of very low concentrations of Ruthenium by spectrophotometric method using barbituric acid as complexing agent

    International Nuclear Information System (INIS)

    Ramakrishna Reddy, S.; Srinivasan, R.; Mallika, C.; Kamachi Mudali, U.; Natarajan, R.

    2012-01-01

    Spectrophotometric methods employing numerous chromogenic reagents such as thiourea, 1,10-phenanthroline, thiocyanate and tropolone are reported in the literature for the estimation of very low concentrations of Ru. In the present work, a sensitive spectrophotometric method has been developed for the determination of ruthenium in the concentration range 1.5 to 6.5 ppm. This method is based on the reaction of ruthenium with barbituric acid to produce the ruthenium(II) tris-violurate complex, (Ru(H₂Va)₃)⁻¹, which gives a stable deep-red coloured solution. The maximum absorption of the complex is at 491 nm due to the inverted t₂g → π(L-L ligand) electron-transfer transition. The molar absorptivity of the coloured species is 9,851 dm³ mol⁻¹ cm⁻¹.
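    Concentration then follows from the Beer-Lambert law, c = A/(ε·l), using the molar absorptivity quoted above. In the short sketch below, the 1 cm path length and the absorbance reading are assumptions for illustration.

```python
# Beer-Lambert estimate of Ru concentration from the absorbance of the
# ruthenium(II) tris-violurate complex at 491 nm.
EPSILON = 9851.0   # dm^3 mol^-1 cm^-1, molar absorptivity from the record
PATH_CM = 1.0      # assumed standard 1 cm cuvette
M_RU = 101.07      # g/mol, atomic mass of ruthenium

def ru_concentration_ppm(absorbance):
    molar = absorbance / (EPSILON * PATH_CM)   # mol/dm^3
    return molar * M_RU * 1000.0               # mg/dm^3, ~ppm for dilute aqueous solutions

print(ru_concentration_ppm(0.35))              # illustrative absorbance reading
```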

  14. Comparison of four classification methods for brain-computer interface

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Bobrov, P.

    2011-01-01

    Roč. 21, č. 2 (2011), s. 101-115 ISSN 1210-0552 R&D Projects: GA MŠk(CZ) 1M0567; GA ČR GA201/05/0079; GA ČR GAP202/10/0262 Institutional research plan: CEZ:AV0Z10300504 Keywords : brain computer interface * motor imagery * visual imagery * EEG pattern classification * Bayesian classification * Common Spatial Patterns * Common Tensor Discriminant Analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 0.646, year: 2011

  15. The role and importance of ozone for atmospheric chemistry and methods for measuring its concentration

    Directory of Open Access Journals (Sweden)

    Marković Dragan M.

    2003-01-01

    Depending on where ozone resides, it can protect or harm life on Earth. The thin layer of ozone that surrounds the Earth acts as a shield protecting the planet from irradiation by UV light. When it is close to the planet's surface, ozone is a powerful photochemical oxidant that damages icons, frescoes, museum exhibits, rubber, plastics and all plant and animal life. Besides the basic properties of some methods for determining the ozone concentration in working and living environments, this paper presents a detailed description of the electrochemical method. The basic properties of the electrochemical method were used in the construction of mobile equipment for determining the sum of oxidants in the atmosphere. The equipment was used to test the determination of the ozone concentration in working rooms where the concentration was high, caused by UV radiation or electrostatic discharge. According to the obtained results, it can be concluded that this equipment for determining the ozone concentration in the atmosphere is very effective and gives reproducible measurements.

  16. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    Energy Technology Data Exchange (ETDEWEB)

    Chukbar, B. K., E-mail: bchukbar@mail.ru [National Research Center Kurchatov Institute (Russian Federation)

    2015-12-15

    Two methods of modeling double-heterogeneity fuel are studied: deterministic positioning and the statistical method CORN of the MCU software package. The effect of the distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for microfuel concentrations up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenizing the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  17. Quantification of total aluminium concentrations in food samples at trace levels by INAA and PIGE methods

    International Nuclear Information System (INIS)

    Nanda, Braja B.; Acharya, R.

    2017-01-01

    Total aluminium contents in various food samples were determined by Instrumental Neutron Activation Analysis (INAA) and Particle Induced Gamma-ray Emission (PIGE) methods. A total of 16 rice samples, collected from the field, were analyzed by INAA using reactor neutrons from the Dhruva reactor, whereas a total of 17 spices, collected from the market, were analyzed by both INAA and PIGE methods in conjunction with high-resolution gamma-ray spectrometry. Aluminium concentration values were found to be in the range of 19-845 mg kg⁻¹ for spices and 15-104 mg kg⁻¹ for rice samples. The methods were validated by analyzing standard reference materials (SRMs) from NIST. (author)

  18. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
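    The acceleration idea can be illustrated with a plain power iteration to which a component-wise Aitken Δ² extrapolation is applied periodically. This is a simplified stand-in for, not a reproduction of, the multilevel aggregation scheme of the paper; the transition matrix is assumed row-stochastic.

```python
import numpy as np

def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=10, max_iter=1000):
    """Power iteration for x = alpha*P^T x + (1-alpha)*v with periodic
    component-wise Aitken extrapolation (P is a row-stochastic matrix)."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)
    x = v.copy()
    hist = []
    for k in range(max_iter):
        x_new = alpha * (P.T @ x) + (1.0 - alpha) * v
        x_new /= x_new.sum()
        hist = (hist + [x_new])[-3:]
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        if len(hist) == 3 and (k + 1) % extrap_every == 0:
            x0, x1, x2 = hist
            denom = x2 - 2.0 * x1 + x0
            safe = np.abs(denom) > 1e-14
            x_new = x2.copy()
            x_new[safe] = x2[safe] - (x2[safe] - x1[safe]) ** 2 / denom[safe]
            x_new = np.abs(x_new)
            x_new /= x_new.sum()
        x = x_new
    return x

# small illustrative example
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
print(pagerank_extrapolated(P))
```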

  19. AN ENHANCED METHOD FOR EXTENDING COMPUTATION AND RESOURCES BY MINIMIZING SERVICE DELAY IN EDGE CLOUD COMPUTING

    OpenAIRE

    B. Bavishna, M. Agalya & G. Kavitha

    2018-01-01

    A great deal of research has been done in the field of cloud computing. For effective performance, a variety of algorithms has been proposed. The role of virtualization is significant, and its performance depends on VM migration and allocation. Much energy is consumed in the cloud; therefore, numerous algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm has been considered with ...

  20. Computational methods to dissect cis-regulatory transcriptional ...

    Indian Academy of Sciences (India)

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for ...

  1. Computer methods in designing tourist equipment for people with disabilities

    Science.gov (United States)

    Zuzda, Jolanta GraŻyna; Borkowski, Piotr; Popławska, Justyna; Latosiewicz, Robert; Moska, Eleonora

    2017-11-01

    Modern technologies enable disabled people to enjoy physical activity every day. Many new structures are matched individually and created for people who fancy active tourism, giving them wider opportunities for active pastime. The process of creating this type of devices in every stage, from initial design through assessment to validation, is assisted by various types of computer support software.

  2. New design methods for computer aided architectural design methodology teaching

    NARCIS (Netherlands)

    Achten, H.H.

    2003-01-01

    Architects and architectural students are exploring new ways of design using Computer Aided Architectural Design software. This exploration is seldom backed up from a design methodological viewpoint. In this paper, a design methodological framework for reflection on innovative design processes by

  3. Computational methods for more fuel-efficient ship

    NARCIS (Netherlands)

    Koren, B.

    2008-01-01

    The flow of water around a ship powered by a combustion engine is a key factor in the ship's fuel consumption. The simulation of flow patterns around ship hulls is therefore an important aspect of ship design. While lengthy computations are required for such simulations, research by Jeroen Wackers

  4. New Methods of Mobile Computing: From Smartphones to Smart Education

    Science.gov (United States)

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  5. An affective music player: Methods and models for physiological computing

    NARCIS (Netherlands)

    Janssen, J.H.; Westerink, J.H.D.M.; van den Broek, Egon

    2009-01-01

    Affective computing is embraced by many to create more intelligent systems and smart environments. In this thesis, a specific affective application is envisioned: an affective physiological music player (APMP), which should be able to direct its user's mood. In a first study, the relationship

  6. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    Science.gov (United States)

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  7. All for One: Integrating Budgetary Methods by Computer.

    Science.gov (United States)

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  8. Method for quantitative assessment of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Dearien, J.A.; Davis, C.B.; Matthews, L.J.

    1979-01-01

    A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison

  9. A Parameter Estimation Method for Dynamic Computational Cognitive Models

    NARCIS (Netherlands)

    Thilakarathne, D.J.

    2015-01-01

    A dynamic computational cognitive model can be used to explore a selected complex cognitive phenomenon by providing some features or patterns over time. More specifically, it can be used to simulate, analyse and explain the behaviour of such a cognitive phenomenon. It generates output data in the

  10. Computed radiography imaging plates and associated methods of manufacture

    Science.gov (United States)

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  11. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  12. A Rapid Method for Determining the Concentration of Recombinant Protein Secreted from Pichia pastoris

    International Nuclear Information System (INIS)

    Sun, L W; Zhao, Y; Jiang, R; Song, Y; Feng, H; Feng, K; Niu, L P; Qi, C

    2011-01-01

    The Pichia secretory expression system is one of the most powerful eukaryotic expression systems in genetic engineering and is especially suitable for industrial use. Because of the low concentration of the target protein in initial experiments, the methods and conditions for expressing the target protein must be optimized repeatedly according to the protein yield. It is therefore necessary to set up a rapid, simple and convenient method for analysing protein expression levels instead of the generally used methods such as ultrafiltration, purification, dialysis, lyophilization and so on. In this paper, the acetone precipitation method was chosen to concentrate the recombinant protein, after systematic comparison with four different protein precipitation methods, and the protein was then analyzed by SDS-polyacrylamide gel electrophoresis. The recombinant protein was quantified directly from the features of its band using the Automated Image Capture and 1-D Analysis Software. With this method, the optimized conditions for the expression of basic fibroblast growth factor secreted from Pichia were obtained, which were the same as those obtained using traditional methods. Hence, a convenient tool to determine the optimized conditions for the expression of recombinant proteins in Pichia was established.

  13. Evaluation of a Method for Quantifying Eugenol Concentrations in the Fillet Tissue from Freshwater Fish Species.

    Science.gov (United States)

    Meinertz, Jeffery R; Schreier, Theresa M; Porcher, Scott T; Smerud, Justin R

    2016-01-01

    AQUI-S 20E(®) (active ingredient, eugenol; AQUI-S New Zealand Ltd, Lower Hutt, New Zealand) is being pursued for approval as an immediate-release sedative in the United States. A validated method to quantify the primary residue (the marker residue) in fillet tissue from AQUI-S 20E-exposed fish was needed. A method was evaluated for determining concentrations of the AQUI-S 20E marker residue, eugenol, in freshwater fish fillet tissue. Method accuracies from fillet tissue fortified at nominal concentrations of 0.15, 1, and 60 μg/g from six fish species ranged from 88-102%. Within-day and between-day method precisions (% CV) from the fortified tissue were ≤8.4% CV. There were no coextracted compounds from the control fillet tissue of seven fish species that interfered with eugenol analyses. Six compounds used as aquaculture drugs did not interfere with eugenol analyses. The lower limit of quantitation (LLOQ) was 0.012 μg/g. The method was robust, i.e., in most cases, minor changes to the method did not impact method performance. Eugenol was stable in acetonitrile-water (3 + 7, v/v) for at least 14 days, in fillet tissue extracts for 4 days, and in fillet tissue stored at ~ -80°C for at least 84 days.

  14. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  15. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
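    A minimal sketch of equilibrium by direct minimization of the dimensionless Gibbs energy subject to element-balance constraints, in the spirit of the method described. The species set, the standard chemical potentials and the starting amounts are illustrative placeholders, not values from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative ideal-gas system: N2, H2, NH3 at 1 bar.
g0 = np.array([0.0, 0.0, -5.0])        # assumed mu0/RT values (placeholders)
A = np.array([[2, 0, 1],               # nitrogen balance
              [0, 2, 3]])              # hydrogen balance
b = A @ np.array([1.0, 3.0, 0.0])      # start from 1 mol N2 + 3 mol H2

def gibbs(n):
    """Dimensionless Gibbs energy G/RT of an ideal-gas mixture at 1 bar."""
    n = np.maximum(n, 1e-12)
    return float(n @ (g0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([0.5, 1.5, 1.0]), method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print("equilibrium moles:", np.round(res.x, 4))
```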

  16. Array Manipulation And Matrix-Tree Method To Identify High Concentration Regions HCRs

    Directory of Open Access Journals (Sweden)

    Rachana Arora

    2015-08-01

    Full Text Available Abstract-Sequence Alignment and Analysis is one of the most important applications of bioinformatics. It involves alignment a pair or more sequences with each other and identify a common pattern that would ultimately lead to conclusions of homology or dissimilarity. A number of algorithms that make use of dynamic programming to perform alignment between sequences are available. One of their main disadvantages is that they involve complicated computations and backtracking methods that are difficult to implement. This paper describes a much simpler method to identify common regions in 2 sequences and align them based on the density of common sequences identified.

  17. Control of the nitrogen concentration in liquid lithium by the hot trap method

    International Nuclear Information System (INIS)

    Sakurai, Toshiharu; Yoneoka, Toshiaki; Tanaka, Satoru; Suzuki, Akihiro; Muroga, Takeo

    2002-01-01

    The nitrogen concentration in liquid lithium was controlled by the hot-trap method. Titanium, vanadium and a V-Ti alloy were used as nitrogen-gettering materials. Gettering experiments were conducted at 673, 773 and 823 K for 0.4-2.8 Ms. After immersion, the nitrogen concentration had increased in the titanium and V-Ti specimens tested at 823 K. The nitrogen-gettering effect of the V-10at.%Ti alloy in particular was found to be large; nitrogen was considered to exist mainly as a solid solution in the V-10at.%Ti alloy. The decrease of the nitrogen concentration in liquid lithium by the V-Ti gettering was also confirmed.

  18. A method of measuring gold nanoparticle concentrations by x-ray fluorescence for biomedical applications

    Energy Technology Data Exchange (ETDEWEB)

    Wu Di; Li Yuhua; Wong, Molly D.; Liu Hong [Center for Bioengineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States)

    2013-05-15

    Purpose: This paper reports a technique that enables the quantitative determination of the concentration of gold nanoparticles (GNPs) through the accurate detection of their fluorescence radiation in the diagnostic x-ray spectrum. Methods: Experimentally, x-ray fluorescence spectra of 1.9 and 15 nm GNP solutions are measured using an x-ray spectrometer, individually and within chicken breast tissue samples. An optimal combination of excitation and emission filters is determined to segregate the fluorescence spectra at 66.99 and 68.80 keV from the background scattering. A roadmap method is developed that subtracts the scattered radiation (acquired before the insertion of GNP solutions) from the signal radiation acquired after the GNP solutions are inserted. Results: The methods effectively minimize the background scattering in the spectrum measurements, showing linear relationships between GNP solutions from 0.1% to 10% weight concentration and from 0.1% to 1.0% weight concentration inside a chicken breast tissue sample. Conclusions: The investigation demonstrated the potential of imaging gold nanoparticles quantitatively in vivo for in-tissue studies, but future studies will be needed to investigate the ability to apply this method to clinical applications.
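    The two computational steps of the technique, subtraction of the pre-insertion "roadmap" spectrum and linear calibration of the net Au fluorescence counts at 66.99 and 68.80 keV against known weight concentrations, can be sketched as follows. The synthetic spectra, window widths and calibration numbers are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
energy = np.linspace(50.0, 80.0, 601)                   # keV, assumed binning

def gaussian(center, height, width=0.4):
    return height * np.exp(-0.5 * ((energy - center) / width) ** 2)

# Toy spectra standing in for real measurements: smooth scatter background,
# plus Au fluorescence lines only in the post-insertion spectrum.
background = 500.0 * np.exp(-(energy - 50.0) / 25.0) + rng.poisson(20, energy.size)
signal = background + gaussian(68.80, 90.0) + gaussian(66.99, 50.0)

net = signal - background                               # "roadmap" subtraction

def window_counts(spec, center, half_width=1.0):
    return float(spec[np.abs(energy - center) <= half_width].sum())

fluorescence = window_counts(net, 66.99) + window_counts(net, 68.80)

# Linear calibration against known GNP weight concentrations (hypothetical data)
conc_pct = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
counts = np.array([120.0, 590.0, 1180.0, 5900.0, 11800.0])
slope, intercept = np.polyfit(conc_pct, counts, 1)
print("estimated concentration (% w/w):", (fluorescence - intercept) / slope)
```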

  19. Evaluation of Cross-Section Sensitivities in Computing Burnup Credit Fission Product Concentrations

    International Nuclear Information System (INIS)

    Gauld, I.C.

    2005-01-01

    U.S. Nuclear Regulatory Commission Interim Staff Guidance 8 (ISG-8) for burnup credit covers actinides only, a position based primarily on the lack of definitive critical experiments and adequate radiochemical assay data that can be used to quantify the uncertainty associated with fission product credit. The accuracy of fission product neutron cross sections is paramount to the accuracy of criticality analyses that credit fission products in two respects: (1) the microscopic cross sections determine the reactivity worth of the fission products in spent fuel, and (2) the cross sections determine the reaction rates during irradiation and thus influence the accuracy of predicted final concentrations of the fission products in the spent fuel. This report evaluates and quantifies the importance of the fission product cross sections in predicting concentrations of fission products proposed for use in burnup credit. The study includes an assessment of the major fission products in burnup credit and their production precursors. Finally, the cross-section importances, or sensitivities, are combined with the importance of each major fission product to the system eigenvalue (keff) to determine the net importance of cross sections to keff. The importances established the following fission products, listed in descending order of priority, that are most likely to benefit burnup credit when their cross-section uncertainties are reduced: Sm-151, Rh-103, Eu-155, Sm-150, Sm-152, Eu-153, Eu-154, and Nd-143.

  20. Subjective and objective image differences in pediatric computed tomography cardiac angiography using lower iodine concentration

    International Nuclear Information System (INIS)

    Hwang, Jae-Yeon; Choo, Ki Seok; Choi, Yoon Young; Kim, Jin Hyeok; Ryu, Hwaseong; Kim, Yong-Woo; Jeon, Ung Bae; Nam, Kyung Jin; Han, Junhee

    2017-01-01

    Several recent studies showed the optimal contrast enhancement with a low-concentration and iso-osmolar contrast media in both adult and pediatric patients. However, low contrast media concentrations are not routinely used due to concerns of suboptimal enhancement of cardiac structures and small vessels. To evaluate the feasibility of using iso-osmolar contrast media containing a low iodine dose for CT cardiac angiography at 80 kilovolts (kVp) in neonates and infants. The iodixanol 270 group consisted of 79 CT scans and the iopromide 370 group of 62 CT scans in patients ≤1 year old. Objective measurement of the contrast enhancement was analyzed and contrast-to-noise ratios of the ascending aorta and left ventricle were calculated. Regarding subjective measurement, a four-point scale system was devised to evaluate degrees of contrast enhancement, image noise, motion artifact and overall image quality of each image set. Reader performance for correctly differentiating iodixanol 270 and iopromide 370 by visual assessment was evaluated. Group objective and subjective measurements were nonsignificantly different. Overall sensitivity, specificity and diagnostic accuracy for correctly differentiating iodixanol 270 and iopromide 370 by visual assessment were 42.8%, 59%, and 50%, respectively. The application of iodixanol 270 achieved optimal enhancement for performing pediatric cardiac CT angiography at 80 kVp in neonates and infants. Objective measurements of contrast enhancement and subjective image quality assessments were not statistically different in the iodixanol 270 and iopromide 370 groups. (orig.)

  1. Subjective and objective image differences in pediatric computed tomography cardiac angiography using lower iodine concentration

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Jae-Yeon [Pusan National University Yangsan Hospital, Department of Radiology, Yangsan-si, Gyeongsangnam-do (Korea, Republic of); Pusan National University Yangsan Hospital, Research Institute for Convergence of Biomedical Science and Technology, Yangsan-si, Gyeongsangnam-do (Korea, Republic of); Choo, Ki Seok; Choi, Yoon Young; Kim, Jin Hyeok; Ryu, Hwaseong; Kim, Yong-Woo; Jeon, Ung Bae; Nam, Kyung Jin [Pusan National University Yangsan Hospital, Department of Radiology, Yangsan-si, Gyeongsangnam-do (Korea, Republic of); Han, Junhee [Pusan National University Yangsan Hospital, Division of Biostatistics, Research Institute for Convergence of Biomedical Science and Technology, Yangsan-si, Gyeongsangnam-do (Korea, Republic of)

    2017-05-15

    Several recent studies showed the optimal contrast enhancement with a low-concentration and iso-osmolar contrast media in both adult and pediatric patients. However, low contrast media concentrations are not routinely used due to concerns of suboptimal enhancement of cardiac structures and small vessels. To evaluate the feasibility of using iso-osmolar contrast media containing a low iodine dose for CT cardiac angiography at 80 kilovolts (kVp) in neonates and infants. The iodixanol 270 group consisted of 79 CT scans and the iopromide 370 group of 62 CT scans in patients ≤1 year old. Objective measurement of the contrast enhancement was analyzed and contrast-to-noise ratios of the ascending aorta and left ventricle were calculated. Regarding subjective measurement, a four-point scale system was devised to evaluate degrees of contrast enhancement, image noise, motion artifact and overall image quality of each image set. Reader performance for correctly differentiating iodixanol 270 and iopromide 370 by visual assessment was evaluated. Group objective and subjective measurements were nonsignificantly different. Overall sensitivity, specificity and diagnostic accuracy for correctly differentiating iodixanol 270 and iopromide 370 by visual assessment were 42.8%, 59%, and 50%, respectively. The application of iodixanol 270 achieved optimal enhancement for performing pediatric cardiac CT angiography at 80 kVp in neonates and infants. Objective measurements of contrast enhancement and subjective image quality assessments were not statistically different in the iodixanol 270 and iopromide 370 groups. (orig.)

  2. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    Science.gov (United States)

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a
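    A sketch of the idea: compute the unfolded-state extinction coefficient from the chromophore content, scale by the reported factor of 1.05 to predict the native-state value, and apply the Beer-Lambert law. The per-chromophore values below are commonly used literature values rather than the model-compound values derived in the article, and the sequence and molecular weight are placeholders.

```python
# Per-chromophore molar extinction coefficients at 280 nm (M^-1 cm^-1),
# assumed literature values for the unfolded state.
EPS_TRP, EPS_TYR, EPS_CYSTINE = 5500.0, 1490.0, 125.0

def predicted_extinction(sequence, n_disulfides=0):
    eps_unfolded = (sequence.count("W") * EPS_TRP
                    + sequence.count("Y") * EPS_TYR
                    + n_disulfides * EPS_CYSTINE)
    return 1.05 * eps_unfolded          # predicted native-state coefficient

def concentration_mg_per_ml(a280, sequence, mol_weight, n_disulfides=0, path_cm=1.0):
    eps = predicted_extinction(sequence, n_disulfides)
    return a280 / (eps * path_cm) * mol_weight   # Beer-Lambert, molar -> mg/mL

# Hypothetical usage with a placeholder peptide sequence
seq = "MKWVTFISLLLLFSSAYSRGVFRRYDTHKS"
print(concentration_mg_per_ml(0.80, seq, mol_weight=3500.0))
```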

  3. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample during the delivery with inert carrier gas which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection.' Two portions of the 40ml of the sample (20ml each) are introduced to the analyzer, the first injection of which flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  4. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed; Meier, Stuart Kurt

    2013-01-01

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  5. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
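    The core of such a screen is a ranked correlation of the gene of interest against all other genes across many expression samples. A sketch on synthetic data; the matrix, gene identifiers and planted co-expression module are placeholders standing in for a real expression compendium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an expression compendium: 2000 genes x 60 samples.
n_genes, n_samples = 2000, 60
expr = rng.standard_normal((n_genes, n_samples))
gene_ids = [f"GENE_{i:04d}" for i in range(n_genes)]

target_idx = 0                                   # index of the gene of interest
# plant a small co-expressed module so the screen has something to find
expr[1:6] = expr[target_idx] + 0.3 * rng.standard_normal((5, n_samples))

# Pearson correlation of every gene's profile with the target profile
centered = expr - expr.mean(axis=1, keepdims=True)
t = centered[target_idx]
r = (centered @ t) / (np.linalg.norm(centered, axis=1) * np.linalg.norm(t) + 1e-12)

# The most strongly co-expressed genes are candidates for a shared function
for i in np.argsort(r)[::-1][1:11]:
    print(gene_ids[i], round(float(r[i]), 3))
```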

  6. Computational Nuclear Physics and Post Hartree-Fock Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lietz, Justin [Michigan State University; Sam, Novario [Michigan State University; Hjorth-Jensen, M. [University of Oslo, Norway; Hagen, Gaute [ORNL; Jansen, Gustav R. [ORNL

    2017-05-01

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  7. Multi-Level iterative methods in computational plasma physics

    International Nuclear Information System (INIS)

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-01-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge the robustness of iterative methods. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
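    The hybrid idea, a multigrid V-cycle used as the preconditioner inside a Krylov solver, can be sketched with SciPy and PyAMG on a model Poisson problem standing in for an implicit field solve. This assumes PyAMG is available and recent SciPy keyword names; it is a generic illustration, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import gmres
import pyamg   # algebraic multigrid, assumed available

# 2-D Poisson problem as a stand-in for an implicit field solve on a fine grid
A = pyamg.gallery.poisson((256, 256), format="csr")
b = np.random.default_rng(0).standard_normal(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)           # build the multigrid hierarchy
M = ml.aspreconditioner(cycle="V")         # one V-cycle per preconditioner application

x, info = gmres(A, b, M=M, rtol=1e-8, maxiter=200)
print("GMRES converged" if info == 0 else f"GMRES info = {info}")
```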

  8. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset

  9. Easy computer assisted teaching method for undergraduate surgery

    OpenAIRE

    Agrawal, Vijay P

    2015-01-01

    Use of computers to aid or support the education or training of people has become commonplace in medical education. Recent studies have shown that it can improve learning outcomes in diagnostic abilities, clinical skills and knowledge across different learner levels from undergraduate medical education to continuing medical education. It also enhances the educational process by increasing access to learning materials, standardising the educational process, providing opportunities for asynchron...

  10. Method for calculating ionic and electronic defect concentrations in y-stabilised zirconia

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, F W [Risoe National Lab., Materials Research Dept., Roskilde (Denmark)

    1997-10-01

    A numerical (trial-and-error) method for the calculation of the concentrations of ions, vacancies, and ionic and electronic defects in solids (Brouwer-type diagrams) is presented. No approximations or truncations of the set of equations describing the chemistry of the various defect regions are used. Doped zirconia and doped thoria with the simultaneous presence of protonic and electronic defects are taken as examples: seven concentrations as functions of oxygen partial pressure and/or water vapour partial pressure are determined. Realistic values for the equilibrium constants for equilibration with oxygen gas and water vapour, as well as for the internal equilibrium between holes and electrons, were taken from the literature. The present mathematical method is versatile; it has also been employed by the author to treat more complex systems, such as perovskite-structure oxides with over- and under-stoichiometry in oxygen, cation vacancies and the simultaneous presence of protons. (au) 6 refs.
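    The trial-and-error character of the calculation can be sketched for a strongly simplified acceptor-doped oxide: with mass-action relations for the electronic and reduction equilibria, charge neutrality becomes a single nonlinear equation in the electron concentration, solved by bracketing at each oxygen partial pressure. The equilibrium constants and dopant level below are illustrative, not the literature values used in the report, and protonic defects are omitted.

```python
import numpy as np
from scipy.optimize import brentq

K_I = 1e-24     # n*p, intrinsic electron-hole equilibrium (illustrative)
K_RED = 4e-18   # v*n^2*sqrt(pO2), reduction O_O -> 1/2 O2 + V_O.. + 2e' (illustrative)
ACC = 0.08      # fixed acceptor (yttrium) site fraction (illustrative)

def neutrality(n, p_o2):
    p = K_I / n
    v = K_RED / (n**2 * np.sqrt(p_o2))
    return 2.0 * v + p - n - ACC          # 2[V_O..] + p = n + [Y'_Zr]

for p_o2 in np.logspace(-30, 0, 7):
    n = brentq(lambda x: neutrality(x, p_o2), 1e-30, 1.0)
    p, v = K_I / n, K_RED / (n**2 * np.sqrt(p_o2))
    print(f"pO2={p_o2:8.1e}  n={n:.2e}  p={p:.2e}  [V_O..]={v:.2e}")
```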

  11. Trace and low concentration co2 removal methods and apparatus utilizing metal organic frameworks

    KAUST Repository

    Eddaoudi, Mohamed

    2016-03-10

    In general, this disclosure describes techniques for removing trace and low concentration CO2 from fluids using SIFSIX-n-M MOFs, wherein n is at least two and M is a metal. In some embodiments, the metal is zinc or copper. Embodiments include devices comprising SIFSIX-n-M MOFs for removing CO2 from fluids. In particular, embodiments relate to devices and methods utilizing SIFSIX-n-M MOFs for removing CO2 from fluids, wherein the CO2 concentration is trace. Methods utilizing SIFSIX-n-M MOFs for removing CO2 from fluids can occur in confined spaces. SIFSIX-n-M MOFs can comprise bidentate organic ligands. In a specific embodiment, SIFSIX-n-M MOFs comprise pyrazine or dipyridylacetylene ligands.

  12. The cell method a purely algebraic computational method in physics and engineering

    CERN Document Server

    Ferretti, Elena

    2014-01-01

    The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in fields such as electromagnetics, electrodynamics, solid mechanics and fluids. CM addresses non-locality in continuum mechanics, an especially important circumstance in modeling heterogeneous materials. Professional engineers and scientists, as well as graduate students, are offered: A general overview of physics and its mathematical descriptions; Guidance on how to build direct, discrete formulations; Coverag...

  13. Stable numerical method in computation of stellar evolution

    International Nuclear Information System (INIS)

    Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.

    1982-01-01

    To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolutions, (5) mass-accreting or losing stars in binary-star systems, and (6) evolution of stellar core whose mass is increasing by shell burning or decreasing by penetration of convective envelope into the core, we face ''multi-timescale problems'' which can neither be treated by simple-minded explicit scheme nor implicit one. This problem has been resolved by three prescriptions; one by introducing the hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by introducing also the hybrid scheme suitable for the multi-timescale problems of hydrodynamic evolution, and the other by introducing the Eulerian or, in other words, the mass fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can compute numerically stably any phase of stellar evolution including transition phases, as far as the star is spherically symmetric. (author)

  14. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  15. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  16. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-22

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  17. Effect of concentrate feeding method on the performance of dairy cows in early to mid lactation.

    Science.gov (United States)

    Purcell, P J; Law, R A; Gordon, A W; McGettrick, S A; Ferris, C P

    2016-04-01

    The objective of the current study was to determine the effects of concentrate feeding method on milk yield and composition, dry matter (DM) intake (DMI), body weight and body condition score, reproductive performance, energy balance, and blood metabolites of housed (i.e., accommodated indoors) dairy cows in early to mid lactation. Eighty-eight multiparous Holstein-Friesian cows were managed on 1 of 4 concentrate feeding methods (CFM; 22 cows per CFM) for the first 21 wk postpartum. Cows on all 4 CFM were offered grass silage plus maize silage (in a 70:30 ratio on a DM basis) ad libitum throughout the study. In addition, cows had a target concentrate allocation of 11 kg/cow per day (from d 13 postpartum) via 1 of 4 CFM, consisting of (1) offered on a flat-rate basis via an out-of-parlor feeding system, (2) offered based on individual cow's milk yields in early lactation via an out-of-parlor feeding system, (3) offered as part of a partial mixed ration (target intake of 5 kg/cow per day) with additional concentrate offered based on individual cow's milk yields in early lactation via an out-of-parlor feeding system, and (4) offered as part of a partial mixed ration containing a fixed quantity of concentrate for each cow in the group. In addition, all cows were offered 1 kg/cow per day of concentrate pellets via an in-parlor feeding system. We detected no effect of CFM on concentrate or total DMI, mean daily milk yield, concentrations and yields of milk fat and protein, or metabolizable energy intakes, requirements, or balances throughout the study. We also found no effects of CFM on mean or final body weight, mean or final body condition score, conception rates to first service, or any of the blood metabolites examined. The results of this study suggest that CFM has little effect on the overall performance of higher-yielding dairy cows in early to mid lactation when offered diets based on conserved forages. Copyright © 2016 American Dairy Science Association

  18. An historical survey of computational methods in optimal control.

    Science.gov (United States)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  19. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  20. Vascular Endothelial Growth Factor Concentration in Chronic Subdural Hematoma Fluid Is Related to Computed Tomography Appearance and Exudation Rate

    Science.gov (United States)

    Weigel, Ralf; Hohenstein, Axel

    2014-01-01

Abstract Chronic subdural hematoma (CSH) is characterized by a net increase of volume over time. Major underlying mechanisms appear to be hemorrhagic episodes and a continuous exudation, which may be studied using labeled proteins to yield an exudation rate in a given patient. We tested the hypothesis that the concentration of vascular endothelial growth factor (VEGF) in hematoma fluid correlates with the rate of exudation. Concentration of VEGF was determined in 51 consecutive patients with CSH by the sandwich immune enzyme-linked immunosorbent assay technique. Mean values were correlated with exudation rates taken from the literature according to the appearance of CSH on computed tomography (CT) images. The CT appearance of each CSH was classified as hypodense, isodense, hyperdense, or mixed density. Mean VEGF concentration was highest in mixed-density hematomas (22,403±4173 pg/mL; mean±standard error of the mean; n=27), followed by isodense (9715±1287 pg/mL; n=9) and hypodense (5955±610 pg/mL; n=18) hematomas. Only 1 patient with hyperdense hematoma fulfilled the inclusion criteria, and the concentration of VEGF found in this patient was 24,200 pg/mL. There was a statistically significant correlation between VEGF concentrations and exudation rates in the four classes of CT appearance (r=0.98). The current report is the first to suggest a pathophysiological link between the VEGF concentration and the exudation rate underlying the steady increase of hematoma volume and CT appearance. With this finding, the current report adds another piece of evidence in favor of the pathophysiological role of VEGF in the development of CSH, including mechanisms contributing to hematoma growth and CT appearance. PMID:24245657

  1. Radon concentration as an indicator of the indoor air quality: development of an efficient measurement method

    International Nuclear Information System (INIS)

    Roessler, F.A.

    2015-01-01

Document available in abstract form only. Full text of publication follows: Energy conservation regulation could lead to a reduction of the air exchange rate and also a degradation of the indoor air quality. Present methods for estimating the indoor air quality can only be implemented with limitations. This paper presents a method that allows the estimation of the indoor air quality under normal conditions by using natural radon as an indicator. With mathematical models, the progression of the air exchange rate is estimated by using the radon concentration. Furthermore, the progression of individual air pollutants is estimated. Through a series of experiments in a measurement chamber, the modelling could be verified. (author)
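
    As a rough illustration of how a radon balance can yield an air exchange rate, the sketch below solves the usual single-zone steady-state mass balance and then recovers the air exchange rate from the measured radon concentration. This is a generic textbook model, not the specific model developed in the cited work, and all numerical values (room size, exhalation rate) are made up for the example.

```python
LAMBDA_RN = 7.55e-3  # radon-222 decay constant (1/h)

def steady_state_radon(exhalation_bq_m2_h, area_m2, volume_m3, ach):
    """Steady-state indoor radon concentration (Bq/m^3) for a single-zone balance."""
    source = exhalation_bq_m2_h * area_m2 / volume_m3   # Bq/(m^3 h)
    return source / (LAMBDA_RN + ach)

def air_exchange_from_radon(c_steady, exhalation_bq_m2_h, area_m2, volume_m3):
    """Invert the steady-state balance to estimate the air exchange rate (1/h)."""
    source = exhalation_bq_m2_h * area_m2 / volume_m3
    return source / c_steady - LAMBDA_RN

# Hypothetical room: 20 m^2 emitting floor, 50 m^3 volume, exhalation 10 Bq/(m^2 h).
c = steady_state_radon(10.0, 20.0, 50.0, ach=0.5)
print(f"steady-state radon: {c:.1f} Bq/m^3")
print(f"recovered air exchange rate: {air_exchange_from_radon(c, 10.0, 20.0, 50.0):.2f} 1/h")
```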

  2. Standard test method for determination of uranium or plutonium isotopic composition or concentration by the total evaporation method using a thermal ionization mass spectrometer

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This method describes the determination of the isotopic composition and/or the concentration of uranium and plutonium as nitrate solutions by the thermal ionization mass spectrometric (TIMS) total evaporation method. Purified uranium or plutonium nitrate solutions are loaded onto a degassed metal filament and placed in the mass spectrometer. Under computer control, ion currents are generated by heating of the filament(s). The ion beams are continually measured until the sample is exhausted. The measured ion currents are integrated over the course of the run, and normalized to a reference isotope ion current to yield isotopic ratios. 1.2 In principle, the total evaporation method should yield isotopic ratios that do not require mass bias correction. In practice, some samples may require this bias correction. When compared to the conventional TIMS method, the total evaporation method is approximately two times faster, improves precision from two to four fold, and utilizes smaller sample sizes. 1.3 The tot...

  3. A standardized method to determine the concentration of extracellular vesicles using tunable resistive pulse sensing

    Directory of Open Access Journals (Sweden)

    Robert Vogel

    2016-09-01

Full Text Available Background: Understanding the pathogenic role of extracellular vesicles (EVs) in disease and their potential diagnostic and therapeutic utility is extremely reliant on in-depth quantification, measurement and identification of EV sub-populations. Quantification of EVs has presented several challenges, predominantly due to the small size of vesicles such as exosomes and the availability of various technologies to measure nanosized particles, each technology having its own limitations. Materials and Methods: A standardized methodology to measure the concentration of extracellular vesicles (EVs) has been developed and tested. The method is based on measuring the EV concentration as a function of a defined size range. Blood plasma EVs are isolated and purified using size exclusion columns (qEV) and consecutively measured with tunable resistive pulse sensing (TRPS). Six independent research groups measured liposome and EV samples with the aim of evaluating the developed methodology. Each group measured identical samples using up to 5 nanopores with 3 repeat measurements per pore. Descriptive statistics and unsupervised multivariate data analysis with principal component analysis (PCA) were used to evaluate reproducibility across the groups and to explore and visualise possible patterns and outliers in EV and liposome data sets. Results: PCA revealed good reproducibility within and between laboratories, with few minor outlying samples. Measured mean liposome (not filtered with qEV) and EV (filtered with qEV) concentrations had coefficients of variation of 23.9% and 52.5%, respectively. The increased variance of the EV concentration measurements could be attributed to the use of qEVs and the polydisperse nature of EVs. Conclusion: The results of this study demonstrate the feasibility of this standardized methodology to facilitate comparable and reproducible EV concentration measurements.

  4. Availability studies for nuclear plants: Models, methods, computer-codes

    International Nuclear Information System (INIS)

    Fischer, F.; Haussmann, W.

    1984-05-01

    Example: (in situ concept); Preconditioned LAW/MAW concentrates of a reprocessing plant in the form of pellets are mixed above ground at the site of final storage with a cement suspension and piped into a cavern where the pellet/suspension product slowly hardens. Up to the point of final storage the following processing steps should be followed: granulate manufacture, manufacture of the granulate-cement suspension, transport of the product underground. (orig.) [de

  5. New Methods for the Computational Fabrication of Appearance

    Science.gov (United States)

    2015-06-01

constraint. The only variables are the values of u and the coefficients α_i. In our current implementation, we chose a radial thin-plate spline basis h_i(r) ... [61] use thin-plate implicit functions to create a smooth morphing between two surfaces. Techniques more similar to our work include [4], who used ... the thin-plate energy, resulting in higher curvature in concentrated regions. On the left, minimizing the third derivative energy, which results in ...

  6. A novel method for the in situ determination of concentration gradients in the electrolyte of Li-ion Batteries

    NARCIS (Netherlands)

    Zhou, J.; Danilov, D.; Notten, P.H.L.

    2006-01-01

An electrochemical method has been developed for the in situ determination of concentration gradients in the electrolyte of sealed Li-ion batteries by measuring the potential difference between microreference electrodes. Formulas relating the concentration gradient and the potential difference have been derived.
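
    The abstract does not reproduce the formulas, but for a binary electrolyte a commonly used concentration-cell relation (ignoring activity-coefficient corrections) links the potential difference between two reference electrodes to the local concentration ratio, ΔΦ ≈ (2RT/F)(1 − t+) ln(c2/c1). The sketch below inverts that relation; it is a generic textbook approximation, not necessarily the expression derived in the paper, and the transference number and measured voltage are assumed values.

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol K)), Faraday constant (C/mol)

def concentration_ratio(delta_phi_v, t_plus=0.38, temp_k=298.15):
    """Concentration ratio c2/c1 implied by a measured potential difference (V)
    between two microreference electrodes, using the simple concentration-cell
    relation delta_phi = (2RT/F) * (1 - t_plus) * ln(c2/c1)."""
    return math.exp(delta_phi_v * F / (2.0 * R * temp_k * (1.0 - t_plus)))

# Hypothetical 5 mV reading between two electrodes placed across the separator.
print(f"c2/c1 = {concentration_ratio(5e-3):.3f}")
```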

  7. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer......-format and COM-objects, are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks....

  8. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  9. Improved methods for computing masses from numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (q{bar q} and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.

  10. Advanced Computational Methods for Thermal Radiative Heat Transfer

    Energy Technology Data Exchange (ETDEWEB)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.,

    2016-10-01

Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to do routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.

  11. The null-event method in computer simulation

    International Nuclear Information System (INIS)

    Lin, S.L.

    1978-01-01

The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation and reduces the likelihood of logical error. (Auth.)
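
    The null-event (null-collision) idea can be sketched in a few lines: instead of evaluating a state-dependent collision frequency before every event, the simulation draws candidate collision times from a constant majorant frequency and then rejects a fraction of the candidates as "null" events. The example below is a minimal, self-contained illustration with a made-up velocity-dependent collision frequency and field; it is not the ion-mobility code discussed in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_collision_freq(speed):
    """Hypothetical velocity-dependent collision frequency (1/s)."""
    return 1.0e9 * (0.2 + 0.8 * speed / (speed + 500.0))

NU_MAX = 1.0e9          # constant majorant frequency; must bound true_collision_freq
E_FIELD_ACCEL = 5.0e8   # acceleration of the ion along the field (m/s^2), made up

def simulate(n_events=100_000):
    v, t, real_collisions = 0.0, 0.0, 0
    for _ in range(n_events):
        dt = rng.exponential(1.0 / NU_MAX)   # candidate free-flight time
        v += E_FIELD_ACCEL * dt              # free flight under the external field
        t += dt
        if rng.random() < true_collision_freq(abs(v)) / NU_MAX:
            real_collisions += 1
            v = rng.normal(0.0, 300.0)       # crude thermal reset after a real collision
        # otherwise: null event, the ion state is left unchanged
    return t, real_collisions

t, n_real = simulate()
print(f"simulated {t:.2e} s, real collisions: {n_real}")
```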

  12. Lattice QCD computations: Recent progress with modern Krylov subspace methods

    Energy Technology Data Exchange (ETDEWEB)

Frommer, A. [Bergische Universitaet GH Wuppertal (Germany)]

    1996-12-31

Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundreds of very large linear systems with several right hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
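
    As a small-scale illustration of the computational pattern described here (many large sparse systems, several right-hand sides), the sketch below solves a sparse non-symmetric system with SciPy's BiCGSTAB Krylov solver, looping over right-hand sides. It only mirrors the structure of the problem; the toy matrix stands in for a Wilson-fermion matrix, and the specialized algorithms of the paper are well beyond this example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

rng = np.random.default_rng(1)

# Toy sparse, non-symmetric, diagonally dominant matrix standing in for a fermion matrix.
n = 2000
A = sp.random(n, n, density=5.0 / n, random_state=1, format="csr")
A = A + sp.eye(n) * 10.0

# Several right-hand sides, solved one after another with a Krylov method.
for k in range(4):
    b = rng.standard_normal(n)
    x, info = bicgstab(A, b)
    assert info == 0, "BiCGSTAB did not converge"
    print(f"rhs {k}: residual = {np.linalg.norm(A @ x - b):.2e}")
```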

  13. Permeability computation on a REV with an immersed finite element method

    International Nuclear Information System (INIS)

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-01-01

    An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.
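
    Once the Stokes flow on the REV has been computed, the homogenized permeability in the flow direction follows from Darcy's law applied to volume-averaged quantities, K = μ⟨u⟩/|∇p|. The sketch below applies only that final step to hypothetical averaged results; it does not perform the immersed finite element computation itself, and all input values are illustrative.

```python
def darcy_permeability(mu_pa_s, mean_velocity_m_s, pressure_drop_pa, length_m):
    """Permeability (m^2) from Darcy's law K = mu * <u> / (dp/L) for a REV of
    length L in the flow direction, using volume-averaged quantities."""
    pressure_gradient = pressure_drop_pa / length_m
    return mu_pa_s * mean_velocity_m_s / pressure_gradient

# Hypothetical REV result: water-like fluid, 1 mm REV, 10 Pa drop, <u> = 2e-5 m/s.
K = darcy_permeability(mu_pa_s=1.0e-3, mean_velocity_m_s=2.0e-5,
                       pressure_drop_pa=10.0, length_m=1.0e-3)
print(f"permeability = {K:.2e} m^2")
```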

  14. Developing a computer-controlled simulated digestion system to predict the concentration of metabolizable energy of feedstuffs for rooster.

    Science.gov (United States)

    Zhao, F; Ren, L Q; Mi, B M; Tan, H Z; Zhao, J T; Li, H; Zhang, H F; Zhang, Z Y

    2014-04-01

    Four experiments were conducted to evaluate the effectiveness of a computer-controlled simulated digestion system (CCSDS) for predicting apparent metabolizable energy (AME) and true metabolizable energy (TME) using in vitro digestible energy (IVDE) content of feeds for roosters. In Exp. 1, the repeatability of the IVDE assay was tested in corn, wheat, rapeseed meal, and cottonseed meal with 3 assays of each sample and each with 5 replicates of the same sample. In Exp. 2, the additivity of IVDE concentration in corn, soybean meal, and cottonseed meal was tested by comparing determined IVDE values of the complete diet with values predicted from measurements on individual ingredients. In Exp. 3, linear models to predict AME and TME based on IVDE were developed with 16 calibration samples. In Exp. 4, the accuracy of prediction models was tested by the differences between predicted and determined values for AME or TME of 6 ingredients and 4 diets. In Exp. 1, the mean CV of IVDE was 0.88% (range = 0.20 to 2.14%) for corn, wheat, rapeseed meal, and cottonseed meal. No difference in IVDE was observed between 3 assays of an ingredient, indicating that the IVDE assay is repeatable under these conditions. In Exp. 2, minimal differences (<21 kcal/kg) were observed between determined and calculated IVDE of 3 complete diets formulated with corn, soybean meal, and cottonseed meal, demonstrating that the IVDE values are additive in a complete diet. In Exp. 3, linear relationships between AME and IVDE and between TME and IVDE were observed in 16 calibration samples: AME = 1.062 × IVDE - 530 (R(2) = 0.97, residual standard deviation [RSD] = 146 kcal/kg, P < 0.001) and TME = 1.050 × IVDE - 16 (R(2) = 0.97, RSD = 148 kcal/kg, P < 0.001). Differences of less than 100 kcal/kg were observed between determined and predicted values in 10 and 9 of the 16 calibration samples for AME and TME, respectively. In Exp. 4, differences of less than 100 kcal/kg between determined and predicted
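
    The two calibration models reported in Exp. 3 are simple linear functions of IVDE, so applying them is straightforward. The sketch below codes those published equations; the example IVDE value is made up, and units are kcal/kg of DM as in the abstract.

```python
def predict_ame(ivde_kcal_kg):
    """AME prediction from the Exp. 3 calibration: AME = 1.062 * IVDE - 530 (kcal/kg)."""
    return 1.062 * ivde_kcal_kg - 530.0

def predict_tme(ivde_kcal_kg):
    """TME prediction from the Exp. 3 calibration: TME = 1.050 * IVDE - 16 (kcal/kg)."""
    return 1.050 * ivde_kcal_kg - 16.0

# Hypothetical ingredient with a determined IVDE of 3,400 kcal/kg.
ivde = 3400.0
print(f"predicted AME = {predict_ame(ivde):.0f} kcal/kg")
print(f"predicted TME = {predict_tme(ivde):.0f} kcal/kg")
```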

  15. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

  16. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

embedding parameter p ∈ [0, 1], which is considered as a 'small parameter'. Considerable research work has recently been conducted in applying this method to a class of linear and nonlinear equations. This method was further developed and improved by He, and applied to nonlinear oscillators with discontinuities [1], ...

  17. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  18. Computation Method Comparison for Th Based Seed-Blanket Cores

    International Nuclear Information System (INIS)

    Kolesnikov, S.; Galperin, A.; Shwageraus, E.

    2004-01-01

This work compares two methods for calculating a given nuclear fuel cycle in the WASB configuration. Both methods use the ELCOS Code System (2-D transport code BOXER and 3-D nodal code SILWER) [4]. In the first method, the cross-sections of the Seed and Blanket, needed for the 3-D nodal code, are generated separately for each region by the 2-D transport code. In the second method, the cross-sections of the Seed and Blanket, needed for the 3-D nodal code, are generated from Seed-Blanket Colorsets (Fig. 1) calculated by the 2-D transport code. The evaluation of the error introduced by the first method is the main objective of the present study.

  19. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to a set of simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction, and the problem to be solved becomes a tractable boundary value problem for the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
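
    The boundary value problem itself is specific to the flux-conserving tokamak formulation, but the shooting idea it relies on can be shown on a generic two-point problem: guess the unknown initial slope, integrate the ODE, and adjust the guess with a root finder until the far boundary condition is met. The sketch below does this for a simple nonlinear BVP and is not the quasi-linearization scheme of the paper; the test equation and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = -exp(y) on [0, 1] with y(0) = y(1) = 0 (a Bratu-type problem).
def rhs(x, state):
    y, dy = state
    return [dy, -np.exp(y)]

def boundary_mismatch(slope_guess):
    """Integrate from x = 0 with y(0) = 0, y'(0) = slope_guess and return y(1)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope_guess], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bracket the root of y(1; slope) = 0 and shoot.
slope = brentq(boundary_mismatch, 0.0, 1.0)
print(f"initial slope found by shooting: {slope:.6f}")
```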

  20. Detection of protozoa in water samples by formalin/ether concentration method.

    Science.gov (United States)

    Lora-Suarez, Fabiana; Rivera, Raul; Triviño-Valencia, Jessica; Gomez-Marin, Jorge E

    2016-09-01

Methods to detect protozoa in water samples are expensive and laborious. We evaluated the formalin/ether concentration method to detect Giardia sp., Cryptosporidium sp. and Toxoplasma in water. In order to test the properties of the method, we spiked water samples with different amounts of each protozoan (0, 10 and 50 cysts or oocysts) in a volume of 10 L of water. An immunofluorescence assay was used for detection of Giardia and Cryptosporidium. Toxoplasma oocysts were identified by morphology. The mean percent recoveries over 10 repetitions of the entire method, in 10 samples spiked with ten parasites and read by three different observers, were 71.3 ± 12 for Cryptosporidium, 63 ± 10 for Giardia and 91.6 ± 9 for Toxoplasma, and the relative standard deviations of the method were 17.5, 17.2 and 9.8, respectively. Intraobserver variation, as measured by the intraclass correlation coefficient, was fair for Toxoplasma, moderate for Cryptosporidium and almost perfect for Giardia. The method was then applied to 77 samples of raw and drinkable water from three different water treatment plants. Cryptosporidium was found in 28 of 77 samples (36%) and Giardia in 31 of 77 samples (40%). These results identified significant differences in the ability of the treatment processes to reduce the presence of Giardia and Cryptosporidium. In conclusion, the formalin/ether method to concentrate protozoa in water is a new alternative for low-resource countries, where there is an urgent need to monitor and follow the presence of these protozoa in drinkable water. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Measurement of exhalation rate of radon and radon concentration in air using open vial method

    International Nuclear Information System (INIS)

    Horiuchi, Kimiko; Ishii, Tadashi.

    1991-01-01

It was recognized that more than half of the total exposure dose to human subjects is caused by radon and its decay products, which originate from naturally occurring radioactive substances (1988 UNSCEAR). Since then, the exhalation of radon from the ground surface has received increasing attention. The authors have developed a new method for the determination of radon in natural water using toluene extraction of radon and a liquid scintillation counter with an integral counting technique, which is able to give absolute counting of radon. During these studies, the authors found that when a counting vial containing liquid scintillator (LS)-toluene solution, without a lid, is exposed to the atmosphere for a while, dissolution of radon clearly occurs owing to the high solubility of radon in the toluene layer. To extend this finding to the determination of radon in the atmosphere, the authors devised a new method to actively collect the atmosphere containing radon in a glass bottle by discharging a definite amount of the water in it, which is named the open-vial dynamic method. The radon concentration can be easily calculated after the necessary corrections, such as the partition coefficient. The proposed method was applied to measure the radon exhalation rate from the ground surface and the radon concentration in air in dwelling environments, a radioactive mineral spring zone, and various geological formations such as granitic or sedimentary rocks. (author)

  2. Determination of molybdenum (VI) in sea water with preliminary concentration by the method of ion flotation

    International Nuclear Information System (INIS)

    Andreeva, I. Yu.; Drapchinskaya, O.L.; Lebedeva, L.I.

    1985-01-01

The purpose of this paper is to assess the feasibility of using the method of ion flotation for the concentration of microamounts of molybdenum (VI) during its determination in sea water. The ion flotation method is used for the purification of industrial sewage from the ions of nonferrous metals, including molybdenum (VI) at contents of up to 50 mg/liter. A 1×10⁻⁴ M solution of sodium molybdate in 0.1 M NaOH was used. The effect of different factors on the ion flotation process of molybdenum (VI) was investigated: pH of the solution, flotation time, and concentrations of surface-active substances (SAS), molybdenum (VI), and extraneous salts. Data presented show that the ion flotation method in conjunction with the photometric method of determining molybdenum with bromopyrogallol red (BPR) and cetylpyridinium chloride (CP) (limit of detection 0.02 micrograms/liter) allows the content of molybdenum (VI) in sea water to be established with sufficient reliability and reproducibility.

3. Classification of methods of conducting computer forensic examinations using a graph theory approach

    Directory of Open Access Journals (Sweden)

    Anna Ravilyevna Smolina

    2016-06-01

Full Text Available A classification of methods of conducting computer forensic examinations, based on a graph theory approach, is proposed. Using this classification, it is possible to accelerate and simplify the search for a suitable examination method and to automate this process.

4. Classification of methods of conducting computer forensic examinations using a graph theory approach

    OpenAIRE

    Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov

    2016-01-01

A classification of methods of conducting computer forensic examinations, based on a graph theory approach, is proposed. Using this classification, it is possible to accelerate and simplify the search for a suitable examination method and to automate this process.

  5. Determining concentrations of 2-bromoallyl alcohol and dibromopropene in ground water using quantitative methods

    Science.gov (United States)

    Panshin, Sandra Y.

    1997-01-01

A method for determining levels of 2-bromoallyl alcohol and 2,3-dibromopropene from ground-water samples using liquid/liquid extraction followed by gas chromatography/mass spectrometry is described. Analytes were extracted from the water using three aliquots of dichloromethane. The aliquots were combined and reduced in volume by rotary evaporation followed by evaporation using a nitrogen stream. The extracts were analyzed by capillary-column gas chromatography/mass spectrometry in the full-scan mode. Estimated method detection limits were 30 nanograms per liter for 2-bromoallyl alcohol and 10 nanograms per liter for 2,3-dibromopropene. Recoveries were determined by spiking three matrices at two concentration levels (0.540 and 5.40 micrograms per liter for 2-bromoallyl alcohol; and 0.534 and 5.34 micrograms per liter for dibromopropene). For seven replicates of each matrix at the high concentration level, the mean percent recoveries ranged from 43.9 to 64.9 percent for 2-bromoallyl alcohol, and from 87.5 to 99.3 percent for dibromopropene. At the low concentration level, the mean percent recoveries ranged from 43.8 to 95.2 percent for 2-bromoallyl alcohol, and from 71.3 to 84.9 percent for dibromopropene.

  6. Control methods of radon and its progeny concentration in indoor atmosphere

    International Nuclear Information System (INIS)

    Ramachandran, T.V.; Subba Ramu, M.C.

    1990-01-01

Exposure to radon-222 and its progeny in indoor atmospheres can result in a significant inhalation risk to the population, particularly to those living in houses with much higher levels of Rn. There are three methods generally used for the control of Rn and its progeny concentration in the indoor environment: (1) restricting the radon entry, (2) reduction of indoor radon concentration by ventilation or by air cleaning and (3) removal of airborne radon progeny by aerosol reduction. The prominent process of radon entry in most residences appears to be the pressure-driven flow of soil gas through cracks or through other openings in the basement slab or subfloor. Sealing off these openings or ventilation of the slab or subfloor spaces are the methods of reducing the radon entry rate. Indoor radon progeny levels can also be reduced by decreasing the aerosol load in the dwellings. The results of a few experiments carried out to study the reduction in the working level concentration of radon by decreasing the aerosol load are discussed in this paper. (author). 9 tabs., 8 figs., 37 refs

  7. Practical method of calculating time-integrated concentrations at medium and large distances

    International Nuclear Information System (INIS)

    Cagnetti, P.; Ferrara, V.

    1980-01-01

Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometers, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear for intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σy and σz are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10 - 20 km), those characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
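
    For a brief ground-level release, the hand calculation referred to here usually amounts to the standard Gaussian plume expression for the time-integrated concentration on the plume axis, TIC = Q / (π σy σz u), with σy and σz taken from distance- or travel-time-dependent fits. The sketch below evaluates that textbook formula with placeholder power-law σ expressions; the coefficients are illustrative and are not the ones proposed in this study.

```python
import math

def sigma_power_law(distance_m, a, b):
    """Generic power-law dispersion parameter sigma = a * x**b (x in metres)."""
    return a * distance_m ** b

def time_integrated_concentration(release_bq, wind_m_s, distance_m):
    """Axial ground-level TIC (Bq s / m^3) for a brief ground-level release,
    TIC = Q / (pi * sigma_y * sigma_z * u). Sigma coefficients are placeholders."""
    sigma_y = sigma_power_law(distance_m, a=0.08, b=0.9)
    sigma_z = sigma_power_law(distance_m, a=0.06, b=0.8)
    return release_bq / (math.pi * sigma_y * sigma_z * wind_m_s)

# Hypothetical release of 1e12 Bq, 5 m/s wind, receptor 50 km downwind.
print(f"TIC = {time_integrated_concentration(1e12, 5.0, 5.0e4):.3e} Bq s/m^3")
```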

  8. Reference interval computation: which method (not) to choose?

    Science.gov (United States)

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing Box-Cox transformation would be the preferable way of RI calculation, if it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
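
    For readers who want to reproduce the comparison, the nonparametric bootstrap estimate of a 95% reference interval takes only a few lines: resample the reference group with replacement, take the 2.5th and 97.5th percentiles of each resample, and summarize those limits. The sketch below is a generic implementation on simulated data, not the authors' code, and the marker distribution and sample size are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_reference_interval(values, n_boot=2000, lower=2.5, upper=97.5):
    """Bootstrap estimate of the reference interval limits and their 90% CIs."""
    values = np.asarray(values)
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        lows[i] = np.percentile(resample, lower)
        highs[i] = np.percentile(resample, upper)
    return (lows.mean(), highs.mean()), (np.percentile(lows, [5, 95]),
                                         np.percentile(highs, [5, 95]))

# Simulated reference group of 60 subjects for a hypothetical right-skewed marker.
sample = rng.lognormal(mean=3.0, sigma=0.3, size=60)
(ri_low, ri_high), (ci_low, ci_high) = bootstrap_reference_interval(sample)
print(f"bootstrap RI: {ri_low:.1f} to {ri_high:.1f}")
print(f"90% CI of lower limit: {ci_low}, of upper limit: {ci_high}")
```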

  9. Methods in Symbolic Computation and p-Adic Valuations of Polynomials

    Science.gov (United States)

    Guan, Xiao

Symbolic computation has appeared widely in many mathematical fields such as combinatorics, number theory and stochastic processes. The techniques created in the area of experimental mathematics provide us with efficient ways of symbolic computing and of verifying complicated relations. Part I consists of three problems. The first one focuses on a unimodal sequence derived from a quartic integral. Many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocals of the Catalan numbers. It springs from the closed form given by Mathematica. Furthermore, three methods in special functions are used to justify this result. The third issue addresses closed form solutions for the moments of products of generalized elliptic integrals, combining experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed by the positive integers, the package developed in Mathematica will create a certain tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoints of field extensions, nonparametric statistics and random matrices.
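
    The basic object in Part II, the p-adic valuation of f(n) over the positive integers, is simple to compute directly, and inspecting the valuations by residue class is what drives the tree construction. The sketch below is a minimal Python illustration of that computation (the original package is in Mathematica); the polynomial n² + 1 and the prime p = 5 are chosen only as an example.

```python
def p_adic_valuation(m, p):
    """Largest k such that p**k divides m (the valuation of 0 is returned as None)."""
    if m == 0:
        return None
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

def valuations(poly, p, n_max):
    """p-adic valuations of poly(n) for n = 1..n_max."""
    return [p_adic_valuation(poly(n), p) for n in range(1, n_max + 1)]

# Example: nu_5(n^2 + 1); the nonzero valuations occur for n = 2 or 3 (mod 5),
# which is the kind of residue-class structure the valuation trees encode.
vals = valuations(lambda n: n * n + 1, p=5, n_max=20)
print(vals)
```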

  10. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    Dragt, A.J.; Gluckstern, R.L.

    1992-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  11. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    Science.gov (United States)

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Interpolation method by whole body computed tomography, Artronix 1120

    International Nuclear Information System (INIS)

    Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo

    1981-01-01

Reconstruction of whole body CT images by an interpolation method with rapid scanning was investigated. The Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable to keep the beam perpendicular to the detector. A length of 150 mm was scanned in about 15 min, with a slice width of 5 mm. The images were reproduced every 7.5 mm, which could be reduced to every 1.5 mm when necessary. Out of 420 inspections of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable were described. The cases were fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and abscess around the rectum. This method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum. The time required and exposure dose were reduced to 50% by this method. (Nakanishi, T.)

  13. A new method for computing the quark-gluon vertex

    International Nuclear Information System (INIS)

    Aguilar, A C

    2015-01-01

In this talk we present a new method for determining the nonperturbative quark-gluon vertex, which constitutes a crucial ingredient for a variety of theoretical and phenomenological studies. This new method relies heavily on the exact all-order relation connecting the conventional quark-gluon vertex with the corresponding vertex of the background field method, which is Abelian-like. The longitudinal part of this latter quantity is fixed using the standard gauge technique, whereas the transverse part is estimated with the help of the so-called transverse Ward identities. This method allows the approximate determination of the nonperturbative behavior of all twelve form factors comprising the quark-gluon vertex, for arbitrary values of the momenta. Numerical results are presented for the form factors in three special kinematical configurations (soft gluon and quark symmetric limit, zero quark momentum), and compared with the corresponding lattice data. (paper)

  14. Medical Data Probabilistic Analysis by Optical Computing Methods

    Directory of Open Access Journals (Sweden)

    Alexander LARKIN

    2014-06-01

Full Text Available The purpose of this article is to show that coherent laser photonics methods can be used for the classification of medical information. It is shown that holographic methods can be used not only for work with images: they can also process information provided in a universal multi-parametric form. Along with the usual correlation algorithm, they make it possible to realize a number of classification algorithms: searching for a precedent, Hamming distance measurement, a Bayes probability algorithm, and deterministic and "correspondence" algorithms. Significantly, this preserves all the advantages of the holographic method: speed, two-dimensionality, record-breaking memory capacity, flexibility of data processing and representation of results, and high radiation resistance in comparison with electronic equipment. As an example, the result of solving one of the problems of medical diagnostics is presented: a forecast of the state of an organism after mass traumatic lesions.

15. Preparation of beryllia concentrate from beryllium minerals by ion exchange method

    International Nuclear Information System (INIS)

    Shoukry, M.M.; Atrees, M. Sh.; Hashem, M.D.

    2007-01-01

The present work is concerned with the preparation of a pure beryllia concentrate from the Zabara beryl mineralization in the mica schist of the Wadi El Gemal area in the Eastern Desert. This has been possible through the application of ion exchange techniques to selectively concentrate beryllium. The method is based on the fact that the beryllium complex of ethylene diamine tetra acetic acid (EDTA), at a pH of about 3.5, is much weaker than the corresponding complexes of iron and aluminum. It was, therefore, possible to effect a complete separation of beryllium from the latter on a cation exchange resin. The studied optimum conditions of separation include a contact time of 3 minutes and a pH of 3.5 for the selective separation of beryllium from its EDTA solution after a prior separation of aluminum

  16. Fast Measurement of Methanol Concentration in Ionic Liquids by Potential Step Method

    Directory of Open Access Journals (Sweden)

    Michael L. Hainstock

    2015-01-01

Full Text Available The development of direct methanol fuel cells requires attention to the electrolyte. A good electrolyte should not only be ionically conductive but also crossover resistant. Ionic liquids could be a promising electrolyte for fuel cells. Monitoring methanol is critical at several locations in a direct methanol fuel cell. Conductivity could be used to monitor the methanol content in ionic liquids. The conductivity of 1-butyl-3-methylimidazolium tetrafluoroborate had a linear relationship with the methanol concentration. However, the conductivity was significantly affected by the moisture or water content in the ionic liquid. In contrast, a potential step method can be used to sense methanol in ionic liquids. This method was not affected by the water content. The sampling current at a properly selected sampling time was proportional to the concentration of methanol in 1-butyl-3-methylimidazolium tetrafluoroborate. The linearity still held even when there was 2.4 M water present in the ionic liquid.
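
    The proportionality between the sampled current and the methanol concentration is what one expects from diffusion-limited chronoamperometry, where the Cottrell equation gives i(t) = n F A C sqrt(D / (π t)). The sketch below evaluates that textbook relation and inverts it for concentration; the electrode area, diffusion coefficient and electron number are assumed values for illustration, not parameters reported in the paper.

```python
import math

F = 96485.0  # Faraday constant, C/mol

def cottrell_current(conc_mol_m3, t_s, n_electrons=6, area_m2=7.9e-7, diff_m2_s=1.0e-10):
    """Diffusion-limited current (A) at time t after the potential step
    (area corresponds to a hypothetical 1 mm diameter disc electrode)."""
    return n_electrons * F * area_m2 * conc_mol_m3 * math.sqrt(diff_m2_s / (math.pi * t_s))

def concentration_from_current(i_a, t_s, **kwargs):
    """Invert the Cottrell relation: concentration (mol/m^3) from a sampled current."""
    return i_a / cottrell_current(1.0, t_s, **kwargs)

# Hypothetical 0.5 M methanol solution sampled 2 s after the step.
i = cottrell_current(500.0, t_s=2.0)
print(f"sampled current = {i * 1e3:.2f} mA")
print(f"recovered concentration = {concentration_from_current(i, 2.0):.1f} mol/m^3")
```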

  17. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model's low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency and save 20%–40% computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.

  18. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  19. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    Science.gov (United States)

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
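
    The model-development step described here can be mirrored with a few lines of ordinary least squares: fit SSC against turbidity alone, then against turbidity and streamflow, and keep the streamflow term only if it is statistically significant and improves the model error. The sketch below shows that workflow on synthetic data using statsmodels; it follows the general logic of the guidelines rather than their exact acceptance criteria, all data are simulated, and the log-transformations often applied in practice are omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic paired calibration samples: turbidity (FNU), streamflow (m^3/s), SSC (mg/L).
turbidity = rng.uniform(5, 400, size=80)
streamflow = rng.uniform(1, 50, size=80)
ssc = 1.8 * turbidity + 2.0 * streamflow + rng.normal(0, 20, size=80)

# Simple linear regression: SSC ~ turbidity.
simple = sm.OLS(ssc, sm.add_constant(turbidity)).fit()

# Multiple linear regression: SSC ~ turbidity + streamflow.
X2 = sm.add_constant(np.column_stack([turbidity, streamflow]))
multiple = sm.OLS(ssc, X2).fit()

print(f"simple model RMSE:   {np.sqrt(simple.mse_resid):.1f} mg/L")
print(f"multiple model RMSE: {np.sqrt(multiple.mse_resid):.1f} mg/L")
print(f"p-value of streamflow term: {multiple.pvalues[2]:.3g}")
# Keep the streamflow term only if it is significant and reduces the model error.
```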

  20. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation.

    Science.gov (United States)

    Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe

    2017-12-01

    Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM 2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%). Copyright © 2017 Elsevier Ltd. All rights reserved.
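
    A minimal version of the sequence-to-one setup described here, an LSTM that maps the previous 24 hours of features to the next hour's PM2.5 concentration, can be sketched in a few lines of PyTorch. The layer sizes, window length and training loop below are placeholders chosen for illustration; they are not the LSTME architecture or the auxiliary-data fusion of the paper.

```python
import torch
import torch.nn as nn

class PM25Forecaster(nn.Module):
    """LSTM that maps a 24-hour window of features to the next-hour PM2.5 value."""
    def __init__(self, n_features=8, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, 24, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # use only the last hidden state

model = PM25Forecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for historical pollutant and meteorological features.
x = torch.randn(32, 24, 8)
y = torch.randn(32, 1)
for _ in range(5):                        # placeholder training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final training loss on the dummy batch: {loss.item():.4f}")
```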