WorldWideScience

Sample records for sample contained approximately

  1. New perspectives on approximation and sampling theory Festschrift in honor of Paul Butzer's 85th birthday

    CERN Document Server

    Schmeisser, Gerhard

    2014-01-01

    Paul Butzer, who is considered the academic father and grandfather of many prominent mathematicians, has established one of the best schools in approximation and sampling theory in the world. He is one of the leading figures in approximation, sampling theory, and harmonic analysis. Although Paul Butzer turned 85 on April 15, 2013, he remains an active research mathematician. In celebration of his 85th birthday, New Perspectives on Approximation and Sampling Theory is a collection of invited chapters on approximation, sampling, and harmonic analysis written by students, friends, colleagues, and prominent active mathematicians. Topics covered include approximation methods using wavelets, multi-scale analysis, frames, and special functions. New Perspectives on Approximation and Sampling Theory requires basic knowledge of mathematical analysis, but efforts were made to keep the exposition clear and the chapters self-contained. This volume will appeal to researchers and graduate...

  2. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.
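
    The importance-sampling identity at the core of methods like SAMCIS can be illustrated with a minimal sketch (this is not the SAMCIS algorithm itself; the normal target, the shifted-exponential proposal, and the tail-probability example are assumptions for illustration):

```python
import math
import random

random.seed(0)

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail_prob(n=200_000):
    """Estimate P(X > 3) for X ~ N(0, 1) by importance sampling:
    draw from a proposal q concentrated in the tail and average the
    weighted integrand p(x)/q(x) over the draws."""
    total = 0.0
    for _ in range(n):
        x = 3.0 + random.expovariate(1.0)   # proposal: 3 + Exponential(1)
        q = math.exp(-(x - 3.0))            # proposal density at x
        total += phi(x) / q                 # importance weight p(x)/q(x)
    return total / n

est = tail_prob()  # true value: P(X > 3) ~ 0.00135
```

    Plain Monte Carlo from N(0, 1) would see an x > 3 only about once per 740 draws; the proposal puts every draw in the tail, which is why the weighted estimator converges so much faster.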

  3. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  4. The development of a Type B sample container

    International Nuclear Information System (INIS)

    Glass, R.E.

    1993-01-01

    Sandia National Laboratories is developing a package to support chemical agent sampling for the multilateral Chemical Weapons Convention. The package is designed to prevent the release of lethal chemical agents during their international transport and to meet the IAEA requirements for a Type B container. The configuration of the packaging, working from the exterior to the interior, is as follows. The outer shell is a sacrificial boundary that protects against the thermal and structural assaults of the hypothetical accident sequence; it carries all of the lifting and tie-down attachments, and its closure is secured with a V-clamp. The cylindrical shell is austenitic stainless steel with standard pressure vessel heads. Internal to this shell is approximately 7 cm of ceramic fiber insulation to protect the containment boundary against the all-engulfing fire. The containment vessel consists of a stainless steel cylindrical shell with pressure vessel heads at each end. The closure includes an O-ring test port for sampling between an elastomeric double O-ring seal. The interior of the package can hold various Teflon inserts machined to accept samples. The package has a mass of 35 kg and external dimensions of 33 cm in length and 30 cm in diameter. The internal cavity is 10 cm in length and 10 cm in diameter; an insert can be machined to accept multiple samples of any configuration within that envelope. This paper describes the design and testing of the Type B sample container. (author)

  5. Rapid Sampling from Sealed Containers

    International Nuclear Information System (INIS)

    Johnston, R.G.; Garcia, A.R.E.; Martinez, R.K.; Baca, E.T.

    1999-01-01

    The authors have developed several different types of tools for sampling from sealed containers. These tools allow the user to rapidly drill into a closed container, extract a sample of its contents (gas, liquid, or free-flowing powder), and permanently reseal the point of entry. This is accomplished without exposing the user or the environment to the container contents, even while drilling. The entire process is completed in less than 15 seconds for a 55-gallon drum. Almost any kind of container can be sampled (regardless of the materials) with wall thicknesses up to 1.3 cm and internal pressures up to 8 atm. Samples can be taken from the top, sides, or bottom of a container. The sampling tools are inexpensive, small, and easy to use. They work with any battery-powered hand drill. This allows considerable safety, speed, flexibility, and maneuverability. The tools also permit the user to rapidly attach plumbing, a pressure relief valve, alarms, or other instrumentation to a container. Possible applications include drum venting, liquid transfer, container flushing, waste characterization, monitoring, sampling for archival or quality control purposes, emergency sampling by rapid response teams, counter-terrorism, non-proliferation and treaty verification, and use by law enforcement personnel during drug or environmental raids.

  6. Sample container for neutron activation analysis

    International Nuclear Information System (INIS)

    Lersmacher, B.; Verheijke, M.L.; Jaspers, H.J.

    1983-01-01

    The sample container avoids contaminating the sample substance by diffusion of foreign matter from the wall of the sample container into the sample. It cannot be activated, so that the results of measurements are not falsified by a radioactive container wall. It consists of solid carbon. (orig./HP)

  7. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach, based on sampling methods, to approximate the exponential integral. Three different sampling methods were used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Argument values covering a wide range were used. The results of the sampling methods were compared with results obtained with Mathematica software, which served as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate compared with LHS and OA-LH, with a root mean square error (RMSE) on the order of 1E-08. This method can be used with any argument value and can be applied to other integrals in hydrogeology, such as the leaky aquifer integral.
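
    The general idea can be sketched with one-dimensional Latin Hypercube (stratified) sampling of the well function (a minimal sketch, not the paper's implementation; the change of variable t = u/s used to map the infinite domain onto (0, 1) is an assumption chosen for illustration):

```python
import math
import random

random.seed(1)

def well_function(u, n=20_000):
    """Approximate the well function W(u) = integral_u^inf exp(-t)/t dt.
    The substitution t = u/s maps the infinite domain onto s in (0, 1):
    W(u) = integral_0^1 exp(-u/s)/s ds, which is then averaged over one
    uniform draw per stratum (1-D Latin Hypercube sampling)."""
    total = 0.0
    for i in range(n):
        s = (i + random.random()) / n   # one uniform draw per stratum
        total += math.exp(-u / s) / s
    return total / n

w1 = well_function(1.0)  # W(1) = E1(1) ~ 0.21938
```

    Stratifying the unit interval forces the samples to cover the whole domain evenly, which is why such schemes converge faster than plain Monte Carlo for this smooth integrand.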

  8. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  9. Sampling and Low-Rank Tensor Approximation of the Response Surface

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann Georg; El-Moselhy, Tarek A.

    2013-01-01

    Most (quasi-)Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate (we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution), there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely when the solution is already well represented by the low-rank tensor approximation. This can easily be checked by evaluating the residual of the PDE with the approximate solution. The procedure is demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.

  10. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  11. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  12. Sampling and Analysis of the Headspace Gas in 3013 Type Plutonium Storage Containers at Los Alamos National Laboratory

    International Nuclear Information System (INIS)

    Jackson, Jay M.; Berg, John M.; Hill, Dallas D.; Worl, Laura A.; Veirs, Douglas K.

    2012-01-01

    Department of Energy (DOE) sites have packaged approximately 5200 3013 containers to date. One of the requirements specified in DOE-STD-3013, which governs the packaging of plutonium-bearing materials, is that the material contain no greater than 0.5 weight percent moisture. The containers are robust, nested, welded vessels. A shelf-life surveillance program was established to monitor these cans over their 50-year design life. In the event pressurization is detected by radiography, it will be necessary to obtain a headspace gas sample from the pressurized container. This technique is also useful for studying the headspace gas in cans selected for random destructive evaluation. The atmosphere is sampled and the hydrogen-to-oxygen ratio is measured to determine the effects of radiolysis on the moisture in the container. A system capable of penetrating all layers of a 3013 container assembly and obtaining a viable sample of the enclosed gas, along with an estimate of internal pressure, was designed.

  13. A TALE-inspired computational screen for proteins that contain approximate tandem repeats.

    Science.gov (United States)

    Perycz, Malgorzata; Krwawicz, Joanna; Bochtler, Matthias

    2017-01-01

    TAL (transcription activator-like) effectors (TALEs) are bacterial proteins that are secreted from bacteria to plant cells to act as transcriptional activators. TALEs and related proteins (RipTALs, BurrH, MOrTL1 and MOrTL2) contain approximate tandem repeats that differ in conserved positions that define specificity. Using Perl, we screened ~47 million protein sequences for TALE-like architecture characterized by approximate tandem repeats (between 30 and 43 amino acids in length) and sequence variability in conserved positions, without requiring sequence similarity to TALEs. Candidate proteins were scored according to their propensity for nuclear localization, secondary structure, repeat sequence complexity, as well as covariation and predicted structural proximity of variable residues. Biological context was tentatively inferred from co-occurrence of other domains and interactome predictions. Approximate repeats with TALE-like features that merit experimental characterization were found in a protein of the chestnut blight fungus, a eukaryotic plant pathogen.

  14. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
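
    The standard criterion being approximated is BIC = k ln(n) - 2 ln(L). A minimal sketch of how it trades fit against complexity (the Gaussian example and fixed-variance baseline are assumptions for illustration, not the paper's design-based variant):

```python
import math
import random

random.seed(3)

def bic(log_likelihood, k, n):
    """Standard Bayesian Information Criterion: BIC = k*ln(n) - 2*lnL.
    Lower is better; the k*ln(n) penalty grows with sample size."""
    return k * math.log(n) - 2.0 * log_likelihood

def gaussian_loglik(data, mu, var):
    return sum(-0.5 * math.log(2.0 * math.pi * var)
               - (x - mu) ** 2 / (2.0 * var) for x in data)

# Compare a 1-parameter model (mean only, variance fixed at 1) with a
# 2-parameter model (mean and variance) on data drawn from N(5, 1).
data = [random.gauss(5.0, 1.0) for _ in range(500)]
n = len(data)
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n

bic1 = bic(gaussian_loglik(data, mu, 1.0), k=1, n=n)  # fixed variance
bic2 = bic(gaussian_loglik(data, mu, var), k=2, n=n)  # fitted variance
```

    With the likelihood held fixed, each extra parameter costs ln(n), which is the penalty the design-based approximation must reproduce without an exact likelihood.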

  15. Thermal expansion of ceramic samples containing natural zeolite

    Science.gov (United States)

    Sunitrová, Ivana; Trník, Anton

    2017-07-01

    In this study the thermal expansion of ceramic samples made with natural zeolite is investigated. Samples are prepared from the two materials most commonly used in the ceramic industry (kaolin and illite). The first material is Sedlec kaolin from the Czech Republic, which contains more than 90 mass% of the mineral kaolinite. The second is an illitic clay from the Tokaj area in Hungary, which contains about 80 mass% of the mineral illite. A varying amount of the clay (0 %-50 %) is replaced by a natural zeolite from Nižný Hrabovec (Slovak Republic), which contains clinoptilolite as the major mineral phase. The measurements are performed on cylindrical samples with a diameter of 14 mm and a length of about 35 mm using a horizontal push-rod dilatometer. Samples made from pure kaolin, illite, and zeolite are also subjected to this analysis. The temperature regime consists of a linear heating rate of 5 °C/min from 30 °C to 1100 °C. The results show that the relative shrinkage of the ceramic samples increases with the amount of zeolite in the samples.

  16. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interfaces, ALARA, Authorization Basis, environmental and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meet the Resource Conservation and Recovery Act (RCRA) criteria for waste that contains volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design Compliance matrix documents (PI-CP-008-00, Rev. 0)

  17. Approximate determination of efficiency for activity measurements of cylindrical samples

    Energy Technology Data Exchange (ETDEWEB)

    Helbig, W [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)

    1997-03-01

    Calibration samples are needed with the same geometrical parameters but of different materials, each containing a known, homogeneously distributed activity A. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E), the commonly valid ε_geom(E) is deduced. For this purpose, the functions ε_μ(E) for these samples have to be established. (orig.)

  18. Super-sample covariance approximations and partial sky coverage

    Science.gov (United States)

    Lacasa, Fabien; Lima, Marcos; Aguena, Michel

    2018-04-01

    Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% level, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as the large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e., redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.

  19. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  20. Container for gaseous samples for irradiation at accelerators

    International Nuclear Information System (INIS)

    Kupsch, H.; Riemenschneider, J.; Leonhardt, J.

    1985-01-01

    The invention concerns a container for gaseous samples for irradiation at accelerators, especially to generate short-lived radioisotopes. The container is also suitable for storage and transport of the target gas and can be reused multiple times.

  1. Development of sealed sample containers and high resolution micro-tomography

    Energy Technology Data Exchange (ETDEWEB)

    Uesugi, Kentaro, E-mail: ueken@spring8.or.jp; Takeuchi, Akihisa; Suzuki, Yoshio [Japan synchrotron radiation research institute, JASRI/SPring-8 Kouto 1-1-1, Sayo, Hyogo 679-5198 Japan (Japan); Uesugi, Masayuki [Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Hamada, Hiroshi [NTT Advanced technology Corporation, Atsugi, Kanagawa 243-0124 (Japan)

    2016-01-28

    A sample container and a high-resolution micro-tomography system have been developed at BL47XU at SPring-8. The container is made of a SiN membrane in the shape of a truncated pyramid, which makes it possible to exclude oxygen and moisture in the air. The sample rotation stage for tomography is set downward to keep the sample in the container without any glue. The spatial resolution and field of view are 300 nm and 110 μm, respectively, using a Fresnel zone plate objective with an outermost zone width of 100 nm at 8 keV. The scan time is about 20 minutes for 1800 projections. A 3-D image of an asteroid particle was successfully obtained without adhesive or contamination.

  2. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilizes an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photons just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of the acquired spectra are inconsistent. In the APO scheme, the generated Raman photons traverse the same container bottom, so the Raman sampling volumes are relatively more consistent under the same conditions. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations, and the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the observed optimal offset distance. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  3. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    Science.gov (United States)

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, to use these models in practice, or to analyze or transmit data from such systems, they must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than previously presented similar results. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
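
    The contrast between the Euler model and a higher-order discretized model can be sketched on a scalar example (the dynamics dx/dt = -x^3 and both step maps are assumptions for illustration, not the paper's construction; the exact solution from x(0) = 1 is x(t) = 1/sqrt(1 + 2t)):

```python
import math

def f(x):
    """Illustrative nonlinear dynamics dx/dt = -x**3."""
    return -x ** 3

def euler_step(x, T):
    # First-order (Euler) approximate sampled-data model
    return x + T * f(x)

def second_order_step(x, T):
    # Second-order Taylor refinement: adds (T**2/2) * f'(x) * f(x),
    # with f'(x) = -3*x**2 for this example
    return x + T * f(x) + 0.5 * T ** 2 * (-3.0 * x ** 2) * f(x)

T, steps = 0.01, 1000            # sampling period, horizon t = 10
x_euler = x_second = 1.0
for _ in range(steps):
    x_euler = euler_step(x_euler, T)
    x_second = second_order_step(x_second, T)

exact = 1.0 / math.sqrt(1.0 + 2.0 * 10.0)   # exact solution at t = 10
```

    The Euler model's global error shrinks like T, while the second-order model's shrinks like T^2, which is why higher-order approximate models matter for controller design at realistic sampling periods.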

  4. Standard practice for sampling special nuclear materials in multi-container lots

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1987-01-01

    1.1 This practice provides an aid in designing a sampling and analysis plan for the purpose of minimizing random error in the measurement of the amount of nuclear material in a lot consisting of several containers. The problem addressed is the selection of the number of containers to be sampled, the number of samples to be taken from each sampled container, and the number of aliquot analyses to be performed on each sample. 1.2 This practice provides examples for application as well as the necessary development for understanding the statistics involved. The uniqueness of most situations does not allow presentation of step-by-step procedures for designing sampling plans. It is recommended that a statistician experienced in materials sampling be consulted when developing such plans. 1.3 The values stated in SI units are to be regarded as the standard. 1.4 This standard does not purport to address all of the safety problems, if any, associated with its use. It is the responsibility of the user of this standar...

  5. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
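
    The simple-random-sampling idea can be sketched for one infectious-pressure term (a minimal illustration under assumed ingredients: the power-law kernel, the uniform spatial layout, and the subset size are all hypothetical, not taken from the paper):

```python
import math
import random

random.seed(7)

def kernel(d, alpha=2.0):
    """Spatial infection kernel; the power-law form is an assumption."""
    return d ** (-alpha)

# n infected individuals scattered over a region; the exact infectious
# pressure on one susceptible individual sums the kernel over all of them.
infected = [(random.uniform(1.0, 10.0), random.uniform(1.0, 10.0))
            for _ in range(5000)]

def pressure_exact(x0, y0):
    return sum(kernel(math.hypot(x - x0, y - y0)) for x, y in infected)

def pressure_sampled(x0, y0, m=500):
    """Simple-random-sampling approximation: sum the kernel over m of
    the n infected and rescale by n/m, cutting the per-evaluation cost
    by a factor n/m at the price of some sampling noise."""
    subset = random.sample(infected, m)
    partial = sum(kernel(math.hypot(x - x0, y - y0)) for x, y in subset)
    return partial * len(infected) / m

exact = pressure_exact(0.0, 0.0)
approx = pressure_sampled(0.0, 0.0)
```

    In an MCMC fit this substitution is applied inside every likelihood evaluation, which is where the bulk of the computational savings comes from; the spatially stratified schemes refine the subset selection to reduce the noise.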

  6. Characterization Data Package for Containerized Sludge Samples Collected from Engineered Container SCS-CON-210

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Matthew S.; Fiskum, Sandra K.; Baldwin, David L.; Daniel, Richard C.; Bos, Stanley J.; Burns, Carolyn A.; Carlson, Clark D.; Coffey, Deborah S.; Delegard, Calvin H.; Edwards, Matthew K.; Greenwood, Lawrence R.; Neiner, Doinita; Oliver, Brian M.; Pool, Karl N.; Schmidt, Andrew J.; Shimskey, Rick W.; Sinkov, Sergey I.; Snow, Lanee A.; Soderquist, Chuck Z.; Thompson, Christopher J.; Trang-Le, Truc LT; Urie, Michael W.

    2013-09-10

    This data package contains the K Basin sludge characterization results obtained by Pacific Northwest National Laboratory during processing and analysis of four sludge core samples collected from Engineered Container SCS-CON-210 in 2010, as requested by CH2M Hill Plateau Remediation Company. Sample processing requirements, analytes of interest, detection limits, and quality control sample requirements are defined in KBC-33786, Rev. 2. The core processing scope included reconstitution of a sludge core sample distributed among four to six 4-L polypropylene bottles into a single container. The reconstituted core sample was then mixed and subsampled to support a variety of characterization activities. Additional core sludge subsamples were combined to prepare a container composite. The container composite was fractionated by wet sieving through a 2,000-micron mesh and a 500-micron mesh sieve. Each sieve fraction was sampled to support a suite of analyses. The core composite analysis scope included density determination, radioisotope analysis, and metals analysis, including the Waste Isolation Pilot Plant Hazardous Waste Facility Permit metals (with the exception of mercury). The container composite analysis included most of the core composite analysis scope plus particle size distribution, particle density, rheology, and crystalline phase identification. A summary of the received samples, core sample reconstitution and subsampling activities, container composite preparation and subsampling activities, physical properties, and analytical results is presented. Supporting data and documentation are provided in the appendices. There were no cases of sample or data loss, and all of the available samples and data are reported as required by the Quality Assurance Project Plan/Sampling and Analysis Plan.

  7. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A lightweight container for the air transport of plutonium, designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for carrying plutonium required the design of small sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister to small powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for small PuO₂ powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  8. Hot sample archiving. Revision 3

    International Nuclear Information System (INIS)

    McVey, C.B.

    1995-01-01

    This Engineering Study revision evaluated alternatives for providing tank waste characterization analytical samples over the time period recommended by the Tank Waste Remediation Systems Program: storing 40-ml segment samples for approximately 18 months (6 months past the approval date of the Tank Characterization Report), then compositing the core segment material in 125-ml containers for a period of five years. The study considers storage at the 222-S facility. It was determined that the critical storage constraint is hot cell space. The 40-ml sample container holds approximately 3 times the material required for a complete laboratory re-analysis. The final result is that 222-S can meet the sample archive storage requirements. At a 100% capture rate the hot cell capacity is exceeded, but quick, inexpensive options are available to meet the requirements

  9. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  10. Sampling and decontamination plan for the Transuranic Storage Area--1/-R container storage unit

    International Nuclear Information System (INIS)

    Barry, G.A.

    1992-11-01

    This document describes the sampling and decontamination of the Transuranic Storage Area (TSA)-1/-R container storage area and the earthen-covered portion of the TSA-2 container storage unit at the Radioactive Waste Management Complex. Stored containers from the earthen-covered asphalt pads will be retrieved from the TSA-1/-R and TSA-2 container storage units. Container retrieval will be conducted under the TSA retrieval enclosure, a fabricated steel building to be constructed over the earthen-covered pad to provide containment and weather protection. Following container retrieval, the TSA retrieval enclosure will be decontaminated to remove radioactive and hazardous contamination. The underlying soils will be sampled and analyzed to determine whether any contaminated soils require removal

  11. Sample summary report for ARG 1 pressure tube sample

    International Nuclear Information System (INIS)

    Belinco, C.

    2006-01-01

    The ARG 1 sample is made from an un-irradiated Zr-2.5% Nb pressure tube. The sample has a 103.4 mm ID, a 112 mm OD, and a length of approximately 500 mm. A punch mark was made very close to one end of the sample; it indicates the 12 o'clock position and also identifies the face of the tube for making all the measurements. The ARG 1 sample contains flaws on the ID and OD surfaces; there was no intentional flaw within the wall of the pressure tube sample. Once the flaws were machined, the pressure tube sample was covered from the outside to hide the OD flaws. Approximately 50 mm of pressure tube length was left open at both ends to facilitate holding the sample in the fixtures for inspection; no flaw was machined in this 50 mm zone on either end. A total of 20 flaws were machined in the ARG 1 sample: 16 on the OD surface and the remaining 4 on the ID surface of the pressure tube. The flaws were characterized into various groups, such as axial flaws, circumferential flaws, etc.

  12. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
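    The single-level building block of this approach is easy to sketch: draw sample locations from a (near-)optimal density, weight by the reciprocal of that density, and solve a discrete least-squares problem. The toy below uses the Chebyshev (arcsine) density on [-1, 1], which is known to be a good sampling measure for polynomial spaces on an interval; the target function, degree, and oversampling factor are our own assumptions, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
f = lambda x: np.exp(x) * np.sin(3 * x)  # illustrative target function

n = 8            # polynomial degree; the subspace has dimension n + 1
m = 5 * (n + 1)  # number of samples, linear in the dimension up to a constant

# Draw sample locations from the Chebyshev (arcsine) density on [-1, 1].
x = np.cos(np.pi * rng.random(m))

# Weight each sample by the reciprocal of the sampling density (up to a
# constant), so the discrete problem mimics the continuous L2 projection.
sqrt_w = np.sqrt(np.pi * np.sqrt(1.0 - x**2))

# Solve the weighted least-squares problem in the Legendre basis.
V = legendre.legvander(x, n)
coef, *_ = np.linalg.lstsq(sqrt_w[:, None] * V, sqrt_w * f(x), rcond=None)

xx = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(f(xx) - legendre.legval(xx, coef)))
print(f"max error of the degree-{n} weighted least-squares fit: {err:.2e}")
```

A multilevel method along the lines of the abstract would combine several such fits, correcting a cheap fit built from coarsely discretized samples with difference terms built from fewer, more accurately computed samples.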

  13. Use of individual containers for prostate biopsy samples: Do we gain diagnostic performance?

    Science.gov (United States)

    Panach-Navarrete, J; García-Morata, F; Valls-González, L; Martínez-Jabaloyas, J M

    2016-05-01

    Prostate cores from transrectal biopsies are usually sent in separate vials for pathological processing. Although this is common practice, studies on its usefulness are conflicting. We wanted to compare the rate of prostate cancer diagnosis between processing the samples in 2 containers and processing them in individual containers, to see whether there are differences. Our secondary objective was to check the rate of diagnosis of various tumour subtypes in each of the 2 groups. A retrospective observational study was conducted on 2,601 cases of prostate biopsies, with ten cores extracted in each biopsy. We divided the sample into 2 groups, according to the different criteria used in our centre in the past: biopsies sent to the department of pathology in 2 containers (left and right lobes) or in 10 (one for each cylinder). We then classified the cases according to the absence of neoplasia, insignificant tumour (involvement of just 1 cylinder […]) […] container group, and 824 were included in the 10-container group. We diagnosed cancer in 32.4% of the 2-container group and in 40% of the 10-container group, a difference that was statistically significant (P […] container group than in the 10-container group (6.4% vs. 4.3%, respectively; P=.03). Samples with a Gleason score of 6 were diagnosed more often in the 10-container group than in the 2-container group (11.9% vs. 8.1%, respectively; P=.002). The same occurred with Gleason scores ≥7 (23.8% in the 10-container group vs. 17.9% in the 2-container group; P […] containers. Once the procedure was conducted, we also observed in our series a reduction in diagnoses of insignificant carcinoma alongside an increase in diagnoses of non-insignificant carcinomas. Copyright © 2015 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  14. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
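    The state-dependent blending idea can be caricatured in a few lines. Everything below is a hypothetical toy (the two quadratic value-function models and the exponential switching weight are our inventions, not the authors' update laws); it only illustrates how a convex combination can hand off from a regional model near the origin to a state-following local model elsewhere.

```python
import numpy as np

def V_reg(x):
    # hypothetical regional (R-MBRL-style) model, valid over the whole set
    return 0.5 * np.dot(x, x)

def V_local(x, center):
    # hypothetical local (StaF-style) model, accurate only near `center`
    r = x - center
    return 0.7 * np.dot(r, r) + 0.45 * np.dot(center, center)

def lam(x, r0=1.0):
    # smooth weight in [0, 1): ~0 near the origin (trust the regional model),
    # ~1 far from it (trust the local model that travels with the state)
    return 1.0 - np.exp(-np.dot(x, x) / r0**2)

def V(x, center):
    # state-dependent convex combination of the two approximations
    a = lam(x)
    return a * V_local(x, center) + (1.0 - a) * V_reg(x)

# Far from the origin the blend essentially returns the local model...
x_far = np.array([2.0, -1.0])
print(V(x_far, center=x_far))
# ...while near the origin it hands off to the regional model.
x_near = np.array([0.1, 0.0])
print(V(x_near, center=x_near))
```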

  15. Np Analysis in IAT-Samples Containing <10 Microgram Pu

    International Nuclear Information System (INIS)

    Ludwig, R.; Raab, W.; Dashdondog, J.; Balsley, S.

    2008-01-01

    A method for the determination of the neptunium-to-plutonium ratio in safeguards samples containing less than 10 microgram Pu is presented. The chemical treatment and the optimized measurement conditions for gamma spectrometry are reported. The method is based on thermal ionization mass spectrometry (TIMS) after chemical treatment and separation, and was validated with mixtures of U, Pu and Np certified reference materials and using the ²³⁷Np standard addition method, followed by separation of the waste fraction and gamma spectrometric analysis. The highest sensitivity, precision and accuracy in the determination of the Np:Pu ratio at microgram levels of Pu are achieved by evaluating ²⁴¹Pu and ²³³Pa after measuring the adsorbent with a well-type gamma detector 3 weeks after chemical treatment. The repeatability of determining the Np:Pu ratio is estimated to be 5%; the maximum uncertainty, as determined from comparing the 4 measurement modes, is within ±10% for samples containing 3 μg Pu and within ±20% for 0.4 μg Pu. (authors)

  16. Np Analysis in IAT-Samples Containing <10 Microgram Pu

    Energy Technology Data Exchange (ETDEWEB)

    Ludwig, R.; Raab, W.; Dashdondog, J.; Balsley, S. [IAEA, Safeguards Analytical Laboratory, Wagramer Str. 5, P.O. Box 100, A-1400 Vienna (Austria)

    2008-07-01

    A method for the determination of the neptunium-to-plutonium ratio in safeguards samples containing less than 10 microgram Pu is presented. The chemical treatment and the optimized measurement conditions for gamma spectrometry are reported. The method is based on thermal ionization mass spectrometry (TIMS) after chemical treatment and separation, and was validated with mixtures of U, Pu and Np certified reference materials and using the ²³⁷Np standard addition method, followed by separation of the waste fraction and gamma spectrometric analysis. The highest sensitivity, precision and accuracy in the determination of the Np:Pu ratio at microgram levels of Pu are achieved by evaluating ²⁴¹Pu and ²³³Pa after measuring the adsorbent with a well-type gamma detector 3 weeks after chemical treatment. The repeatability of determining the Np:Pu ratio is estimated to be 5%; the maximum uncertainty, as determined from comparing the 4 measurement modes, is within ±10% for samples containing 3 μg Pu and within ±20% for 0.4 μg Pu. (authors)

  17. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are based […]

  18. Determination of uranium in samples containing bulk aluminium

    International Nuclear Information System (INIS)

    Das, S.K.; Kannan, R.; Dhami, P.S.; Tripathi, S.C.; Gandhi, P.M.

    2015-01-01

    The determination of uranium is of great importance in the PUREX process, and uranium needs to be analyzed at different concentration ranges depending on the stage of reprocessing. Various techniques such as volumetry, spectrophotometry, ICP-OES, fluorimetry and mass spectrometry are used for the measurement of uranium in these samples. Fast and sensitive methods suitable for low-level detection of uranium are desirable to cater to process needs. Microgram quantities of uranium are analyzed by a spectrophotometric method using 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (Br-PADAP) as the complexing agent. However, the presence of some metal ions, viz. Al, Pu and Zr, interferes with the analysis. Therefore, separation of uranium from such interfering metal ions is required prior to its analysis. This paper describes the analysis of uranium in samples containing aluminium as the major matrix

  19. Production of vegetation samples containing radionuclides gamma emitters to attend the interlaboratory programs

    International Nuclear Information System (INIS)

    Souza, Poliana Santos de

    2016-01-01

    The production of environmental samples such as soil, sediment, water and vegetation containing radionuclides for intercomparison tests is an important contribution to environmental monitoring, since laboratories that carry out such monitoring need to demonstrate that their results are reliable. The IRD National Intercomparison Program (PNI) produces and distributes environmental samples containing radionuclides that are used to check laboratory performance. This work demonstrates the feasibility of producing vegetation (grass) samples containing ⁶⁰Co, ⁶⁵Zn, ¹³⁴Cs and ¹³⁷Cs by the spiked-sample method for the PNI. The preparation and the statistical tests followed the recommendations of ISO Guides 34 and 35. The grass samples were dried, ground and passed through a 250 μm sieve; 500 g of vegetation was treated in each procedure. Samples were treated by two different procedures: 1) homogenizing the radioactive solution with the vegetation by hand and drying in an oven, and 2) homogenizing the radioactive solution with the vegetation in a rotary evaporator and drying in an oven. The theoretical activity concentrations of the radionuclides in the grass ranged from 593 Bq/kg to 683 Bq/kg. After gamma spectrometry analysis, the results of both procedures were compared in terms of accuracy, precision, homogeneity and stability. The accuracy, precision and short-term stability of the two methods were similar, but the homogeneity test of the evaporation method failed for the radionuclides ⁶⁰Co and ¹³⁴Cs. Based on the comparison between procedures, the manual agitation procedure was chosen for producing the grass samples for the PNI. The accuracy of the procedure, represented by the uncertainty and based on the theoretical value, ranged between -1.1 and 5.1%, and the precision between 0.6 and 6.5%. This is therefore the procedure chosen for the production of grass samples for the PNI. (author)

  20. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  1. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying r...

  2. Development of rubidium and niobium containing plastic foams. Final report

    International Nuclear Information System (INIS)

    Botham, R.A.; McClung, C.E.; Schwendeman, J.I.

    1978-01-01

    Rubidium fluoride and niobium metal-containing foam samples (rods and sheets) were prepared using two foam systems: (1) hydrophilic polyurethanes prepared from W.R. Grace Co.'s Hypol prepolymers and (2) polyimides prepared from Monsanto Company's Skybond polyimide resin. The first system was used only for the preparation of rubidium fluoride-containing foams, while the second was used for both rubidium fluoride- and niobium-containing foams. The niobium metal could readily be incorporated into the polyimide foam during molding to produce foam sheets of the required dimensions and density. The rubidium fluoride-containing polyimide foams were preferably prepared by first rendering the molded polyimide foam hydrophilic with a postcuring treatment, then absorbing the rubidium fluoride from water solution. Similarly, rubidium fluoride was absorbed into the hydrophilic polyurethanes from water solution. Since the highly reactive rubidium metal could not be employed, rubidium fluoride, which is very hygroscopic, was used instead, primarily because of its high rubidium content (approximately 82 weight percent). This was important in view of the low total densities and the high weight percentage of rubidium required in the foam samples. In addition, at the later request of LLL, a block of rigid Hypol hydrophilic polyurethane foam (with a density of approximately 0.04 g/cm³ and cell sizes ≤ 0.2 mm) was prepared without any metal or metal compounds in it. Two shipments of foam samples, which met or closely approximated the project specifications, were submitted to LLL during the course of this project. Information on these samples is contained in Table 1. A complete description of their preparation is given in the Experimental Results and Discussion Section

  3. Hayabusa2 Sample Catcher and Container: Metal-Seal System for Vacuum Encapsulation of Returned Samples with Volatiles and Organic Compounds Recovered from C-Type Asteroid Ryugu

    Science.gov (United States)

    Okazaki, Ryuji; Sawada, Hirotaka; Yamanouchi, Shinji; Tachibana, Shogo; Miura, Yayoi N.; Sakamoto, Kanako; Takano, Yoshinori; Abe, Masanao; Itoh, Shoichi; Yamada, Keita; Yabuta, Hikaru; Okamoto, Chisato; Yano, Hajime; Noguchi, Takaaki; Nakamura, Tomoki; Nagao, Keisuke

    2017-07-01

    The spacecraft Hayabusa2 was launched on December 3, 2014, to collect and return samples from a C-type asteroid, 162173 Ryugu (provisional designation, 1999 JU3). It is expected that the samples collected contain organic matter and water-bearing minerals and have key information to elucidate the origin and history of the Solar System and the evolution of bio-related organics prior to delivery to the early Earth. In order to obtain samples with volatile species without terrestrial contamination, based on lessons learned from the Hayabusa mission, the sample catcher and container of Hayabusa2 were refined from those used in Hayabusa. The improvements include (1) a mirror finish of the inner wall surface of the sample catcher and the container, (2) adoption of an aluminum metal sealing system, and (3) addition of a gas-sampling interface for gas collection and evacuation. The former two improvements were made to limit contamination of the samples by terrestrial atmosphere below 1 Pa after the container is sealed. The gas-sampling interface will be used to promptly collect volatile species released from the samples in the sample container after sealing of the container. These improvements maintain the value of the returned samples.

  4. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  5. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the self-similar exponential approximants and self-similar root approximants obtained earlier. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
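    For orientation, a factor approximant built from a truncated power series has, as we understand the self-similar approximation literature (notation ours, details hedged), the product form

$$
f_k^*(x) = f_0(x) \prod_{i=1}^{N_k} \left(1 + A_i x\right)^{n_i},
$$

where the parameters $A_i$ and the powers $n_i$ are fixed by re-expanding $f_k^*$ in powers of $x$ and matching the known coefficients of the asymptotic series (the accuracy-through-order conditions). Restricting the powers $n_i$ to integers reduces the product to a rational function, which is how the Padé approximants arise as a particular case.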

  6. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  7. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  8. Adventitious Carbon on Primary Sample Containment Metal Surfaces

    Science.gov (United States)

    Calaway, M. J.; Fries, M. D.

    2015-01-01

    Future missions that return astromaterials with trace carbonaceous signatures will require strict protocols for reducing and controlling terrestrial carbon contamination. Adventitious carbon (AC) on primary sample containers and related hardware is an important source of that contamination. AC is a thin film layer or heterogeneously dispersed carbonaceous material that naturally accrues from the environment on the surface of atmosphere-exposed metal parts. To test basic cleaning techniques for AC control, metal surfaces commonly used for flight hardware and curating astromaterials at JSC were cleaned using a basic cleaning protocol and characterized for AC residue. Two electropolished stainless steel 316L (SS-316L) and two Al 6061 (Al-6061) test coupons (2.5 cm diameter by 0.3 cm thick) were subjected to precision cleaning in the JSC Genesis ISO class 4 cleanroom Precision Cleaning Laboratory. Afterwards, the samples were analyzed by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy.

  9. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  10. High pressure sample container for thermal neutron spectroscopy and diffraction on strongly scattering fluids

    International Nuclear Information System (INIS)

    Verkerk, P.; Pruisken, A.M.M.

    1979-01-01

    A description is presented of the construction and performance of a container for thermal neutron scattering on a fluid sample with a macroscopic cross section of about 1.5 cm⁻¹ (neglecting absorption). The maximum pressure is about 900 bar. The container is made of 5052 aluminium capillary with an inner diameter of 0.75 mm and a wall thickness of 0.25 mm; it covers a neutron beam with a cross section of 9 × 2.5 cm². The container has been successfully used in neutron diffraction and time-of-flight experiments on argon-36 at 120 K and several pressures up to 850 bar. It is shown that during these measurements the temperature gradient over the sample as well as the error in the absolute temperature were both less than 0.05 K. Subtraction of the Bragg peaks due to container scattering in diffraction experiments may be difficult, but seems feasible because of the small amount of aluminium in the neutron beam. Correction for container scattering and multiple scattering in time-of-flight experiments may be difficult only in the case of coherently scattering samples and small scattering angles. (Auth.)

  11. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  12. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  13. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  14. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; we then use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution without introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
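
    The sample average approximation step can be sketched on a toy newsvendor problem (a hypothetical example, not the paper's multi-factory supply-chain formulation): the expectation in the objective is replaced by an average over sampled demand scenarios, and the resulting deterministic surrogate is minimized.

```python
import random

def saa_newsvendor(cost=1.0, price=2.0, n_samples=2000, seed=7):
    """Sample average approximation (SAA) for a toy newsvendor problem.

    True objective: minimize E[cost*x - price*min(x, D)] over order size x,
    with demand D ~ Uniform(50, 150). SAA replaces the expectation with an
    average over sampled demand scenarios.
    """
    rng = random.Random(seed)
    demands = [rng.uniform(50, 150) for _ in range(n_samples)]

    def saa_objective(x):
        # Average cost over the sampled scenarios (the SAA surrogate).
        return sum(cost * x - price * min(x, d) for d in demands) / n_samples

    # The surrogate is piecewise linear in x, so a grid search suffices here.
    grid = [i * 0.5 for i in range(0, 401)]  # x in [0, 200]
    return min(grid, key=saa_objective)

x_star = saa_newsvendor()
# For Uniform(50, 150) demand the true optimum is the (1 - cost/price) = 0.5
# quantile, i.e. 100; the SAA solution converges there as n_samples grows.
```

    As the abstract notes, the convergence of the SAA optimal value to its true counterpart is exponentially fast in the sample size under standard conditions.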

  15. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a loss-of-coolant accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ±12%. (author) [pt

  16. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the well-known Kalman update formula is a particular case of this update.
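
    The relationship to the Kalman update can be illustrated in one dimension (a generic linear-Gaussian example, not the paper's polynomial chaos construction): for a Gaussian prior and a linear-Gaussian observation, the exact Bayesian posterior coincides with the Kalman update, which is the best linear approximation of the Bayesian update in general.

```python
def kalman_update(m, s2, y, r2):
    """Kalman update for prior N(m, s2) and observation y = x + e, e ~ N(0, r2)."""
    gain = s2 / (s2 + r2)
    return m + gain * (y - m), (1 - gain) * s2

def bayes_gaussian_posterior(m, s2, y, r2):
    """Exact Gaussian-Bayes posterior via precision (inverse-variance) weighting."""
    prec = 1 / s2 + 1 / r2
    return (m / s2 + y / r2) / prec, 1 / prec

km, kv = kalman_update(0.0, 4.0, 2.0, 1.0)
bm, bv = bayes_gaussian_posterior(0.0, 4.0, 2.0, 1.0)
# In the linear-Gaussian case the two coincide: mean 1.6, variance 0.8.
```

    A non-linear update generalizes the constant gain to a non-linear function of the observation, reducing to the formula above in the linear-Gaussian case.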

  17. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the well-known Kalman update formula is a particular case of this update.

  18. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
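
    A toy version of the bounded in-degree idea (hypothetical code, not the authors' algorithm) gives each node at most d parents chosen by empirical pairwise mutual information; under a KL-divergence objective, good parents are those that carry the most information about the node.

```python
import math
import random

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two binary sample lists."""
    n = len(xs)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = sum(1 for x, y in zip(xs, ys) if x == a and y == b) / n
            px = sum(1 for x in xs if x == a) / n
            py = sum(1 for y in ys if y == b) / n
            if pxy > 0:
                mi += pxy * math.log(pxy / (px * py))
    return mi

def best_parents(data, node, max_degree=1):
    """Pick at most max_degree parents for `node`, ranked by empirical MI.

    data: dict mapping node index -> list of binary observations.
    """
    others = [j for j in data if j != node]
    ranked = sorted(others, reverse=True,
                    key=lambda j: mutual_information(data[j], data[node]))
    return ranked[:max_degree]

rng = random.Random(0)
x1 = [rng.randint(0, 1) for _ in range(2000)]
x2 = [b ^ (rng.random() < 0.1) for b in x1]   # x2 copies x1 with 10% flip noise
x3 = [rng.randint(0, 1) for _ in range(2000)]  # x3 is independent
data = {1: x1, 2: x2, 3: x3}
parents_of_2 = best_parents(data, 2, max_degree=1)  # selects node 1
```

    The paper's setting uses directed information between processes rather than static mutual information, and handles the connectedness constraint and near-optimality guarantees that this sketch omits.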

  19. Laser-induced breakdown spectroscopy for the real-time analysis of mixed waste samples containing Sr

    International Nuclear Information System (INIS)

    Barefield, J.E. II; Koskelo, A.C.; Multari, R.A.; Cremers, D.A.; Gamble, T.K.; Han, C.Y.

    1995-01-01

    In this report, the use of laser-induced breakdown spectroscopy to analyze mixed waste samples containing Sr is discussed. The mixed waste samples investigated include vitrified waste glass and contaminated soil. Compared to traditional analysis techniques, the laser-based method is fast (i.e., analysis times on the order of minutes) and essentially waste free, since little or no sample preparation is required. Detection limits on the order of ppm Sr were determined. Detection limits obtained using a fiber optic cable to deliver laser pulses to soil samples containing Cr, Zr, Pb, Be, Cu, and Ni will also be discussed.

  20. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  1. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  2. A hydrogen leak-tight, transparent cryogenic sample container for ultracold-neutron transmission measurements

    Science.gov (United States)

    Döge, Stefan; Hingerl, Jürgen

    2018-03-01

    The improvement of the number of extractable ultracold neutrons (UCNs) from converters based on solid deuterium (sD2) crystals requires a good understanding of the UCN transport and how the crystal's morphology influences its transparency to the UCNs. Measurements of the UCN transmission through cryogenic liquids and solids of interest, such as hydrogen (H2) and deuterium (D2), require sample containers with thin, highly polished and optically transparent windows and a well defined sample thickness. One of the most difficult sealing problems is that of light gases like hydrogen and helium at low temperatures against high vacuum. Here we report on the design of a sample container with two 1 mm thin amorphous silica windows cold-welded to aluminum clamps using indium wire gaskets, in order to form a simple, reusable, and hydrogen-tight cryogenic seal. The container meets the above-mentioned requirements and withstands up to 2 bar hydrogen gas pressure against isolation vacuum in the range of 10-5 to 10-7 mbar at temperatures down to 4.5 K. Additionally, photographs of the crystallization process are shown and discussed.

  3. A hydrogen leak-tight, transparent cryogenic sample container for ultracold-neutron transmission measurements.

    Science.gov (United States)

    Döge, Stefan; Hingerl, Jürgen

    2018-03-01

    The improvement of the number of extractable ultracold neutrons (UCNs) from converters based on solid deuterium (sD2) crystals requires a good understanding of the UCN transport and how the crystal's morphology influences its transparency to the UCNs. Measurements of the UCN transmission through cryogenic liquids and solids of interest, such as hydrogen (H2) and deuterium (D2), require sample containers with thin, highly polished and optically transparent windows and a well defined sample thickness. One of the most difficult sealing problems is that of light gases like hydrogen and helium at low temperatures against high vacuum. Here we report on the design of a sample container with two 1 mm thin amorphous silica windows cold-welded to aluminum clamps using indium wire gaskets, in order to form a simple, reusable, and hydrogen-tight cryogenic seal. The container meets the above-mentioned requirements and withstands up to 2 bar hydrogen gas pressure against isolation vacuum in the range of 10-5 to 10-7 mbar at temperatures down to 4.5 K. Additionally, photographs of the crystallization process are shown and discussed.

  4. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  5. The shielding properties of the newly developed container for transport of samples contaminated with CBRN substances

    International Nuclear Information System (INIS)

    Fisera, O.; Kares, J.

    2014-01-01

    A container for the transport of environmental samples to the analytical laboratory is being developed as part of a system for the collection and transport of samples contaminated with chemical, biological, radioactive and nuclear (CBRN) substances after CBRN incidents. The proposed system corresponds to the current requirements of NATO publication AEP-66. The proposed container will meet the requirements of mechanical stability and tightness for the packaging of chemical, biological and radioactive substances. The aim of this part of the research was to verify the shielding properties and compliance with radiation protection requirements during the transport of potentially high-activity samples. The results, together with the wall thickness of the inner steel container, the inner lining and the outer transport package, give good grounds to assume that the radiation protection requirements for the proposed container and transport package will be satisfied. (authors)

  6. Gamma spectroscopy analysis of archived Marshall Island soil samples

    International Nuclear Information System (INIS)

    Herman, S.; Hoffman, K.; Lavelle, K.; Trauth, A.; Glover, S.E.; Connick, W.; Spitz, H.; LaMont, S.P.; Hamilton, T.

    2016-01-01

    Four samples of archival Marshall Islands soil were subjected to non-destructive, broad-energy (17 keV-2.61 MeV) gamma-ray spectrometry analysis using a series of different high-resolution germanium detectors. These archival samples were collected in 1967 from different locations on Bikini Atoll and were contaminated with a range of fission and activation products, and other nuclear material from multiple weapons tests. Unlike samples collected recently, these samples have been stored in sealed containers and have been unaffected by approximately 50 years of weathering. Initial results show that the samples contained measurable but proportionally different concentrations of plutonium, 241 Am, 137 Cs, and 60 Co. (author)

  7. Suitability of selected free-gas and dissolved-gas sampling containers for carbon isotopic analysis.

    Science.gov (United States)

    Eby, P; Gibson, J J; Yi, Y

    2015-07-15

    Storage trials were conducted for 2 to 3 months using a hydrocarbon and carbon dioxide gas mixture with known carbon isotopic composition to simulate typical hold times for gas samples prior to isotopic analysis. A range of containers (both pierced and unpierced) was periodically sampled to test for δ(13)C isotopic fractionation. Seventeen containers were tested for free-gas storage (20°C, 1 atm pressure) and 7 containers were tested for dissolved-gas storage, the latter prepared by bubbling free gas through tap water until saturated (20°C, 1 atm) and then preserved to avoid biological activity by acidifying to pH 2 with phosphoric acid and stored in the dark at 5°C. Samples were extracted using valves or by piercing septa, and then introduced into an isotope ratio mass spectrometer for compound-specific δ(13)C measurements. For free gas, stainless steel canisters and crimp-top glass serum bottles with butyl septa were most effective at preventing isotopic fractionation (pierced and unpierced), whereas silicone and PTFE-butyl septa allowed significant isotopic fractionation. FlexFoil and Tedlar bags were found to be effective only for storage of up to 1 month. For dissolved gas, crimp-top glass serum bottles with butyl septa were again effective, whereas silicone and PTFE-butyl were not. FlexFoil bags were reliable for up to 2 months. Our results suggest a range of preferred containers as well as several that did not perform very well for isotopic analysis. Overall, the results help establish better QA/QC procedures to avoid isotopic fractionation when storing environmental gas samples. Recommended containers for air transportation include steel canisters and glass serum bottles with butyl septa (pierced and unpierced). Copyright © 2015 John Wiley & Sons, Ltd.

  8. A new and self-contained presentation of the theory of boundary operators for slit diffraction and their logarithmic approximations

    Energy Technology Data Exchange (ETDEWEB)

    Gorenflo, Norbert [Beuth Hochschule fuer Technik Berlin (Germany). Fachbereich II; Kunik, Matthias [Magdeburg Univ. (Germany). Inst. fuer Analysis und Numerik

    2009-07-01

    We present a new and self-contained theory for mapping properties of the boundary operators for slit diffraction occurring in Sommerfeld's diffraction theory, covering two different cases of the polarisation of the light. This theory is entirely developed in the context of the boundary operators with a Hankel kernel and not based on the corresponding mixed boundary value problem for the Helmholtz equation. For a logarithmic approximation of the Hankel kernel we also study the corresponding mapping properties and derive explicit solutions together with certain regularity results. (orig.)

  9. Microanalysis study of archaeological mural samples containing Maya blue pigment

    International Nuclear Information System (INIS)

    Sanchez del Rio, M.; Martinetto, P.; Somogyi, A.; Reyes-Valerio, C.; Dooryhee, E.; Peltier, N.; Alianelli, L.; Moignard, B.; Pichon, L.; Calligaro, T.; Dran, J.-C.

    2004-01-01

    Elemental analysis by X-ray fluorescence and particle-induced X-ray emission is applied to the study of several Mesoamerican mural samples containing blue pigments. The most characteristic blue pigment is Maya blue, a very stable organo-clay complex originating from the Maya culture and widely used in murals, pottery and sculptures in a vast region of Mesoamerica during pre-hispanic times (from the VIII century) and during the colonization until 1580. The mural samples come from six different archaeological sites (four pre-hispanic and two from XVI-century colonial convents). The correlation between the presence of some elements and the pigment colour is discussed. From the comparative study of the elemental concentrations, some conclusions are drawn on the nature of the pigments and the technology used.

  10. Microanalysis study of archaeological mural samples containing Maya blue pigment

    Science.gov (United States)

    Sánchez del Río, M.; Martinetto, P.; Somogyi, A.; Reyes-Valerio, C.; Dooryhée, E.; Peltier, N.; Alianelli, L.; Moignard, B.; Pichon, L.; Calligaro, T.; Dran, J.-C.

    2004-10-01

    Elemental analysis by X-ray fluorescence and particle-induced X-ray emission is applied to the study of several Mesoamerican mural samples containing blue pigments. The most characteristic blue pigment is Maya blue, a very stable organo-clay complex originating from the Maya culture and widely used in murals, pottery and sculptures in a vast region of Mesoamerica during pre-hispanic times (from the VIII century) and during the colonization until 1580. The mural samples come from six different archaeological sites (four pre-hispanic and two from XVI-century colonial convents). The correlation between the presence of some elements and the pigment colour is discussed. From the comparative study of the elemental concentrations, some conclusions are drawn on the nature of the pigments and the technology used.

  11. Microanalysis study of archaeological mural samples containing Maya blue pigment

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, M. [ESRF, BP220, F-38043 Grenoble (France)]. E-mail: srio@esrf.fr; Martinetto, P. [Laboratoire de Cristallographie, CNRS, BP166 F-30842 Grenoble (France); Somogyi, A. [ESRF, BP220, F-38043 Grenoble (France); Reyes-Valerio, C. [INAH, Mexico DF (Mexico); Dooryhee, E. [Laboratoire de Cristallographie, CNRS, BP166 F-30842 Grenoble (France); Peltier, N. [Laboratoire de Cristallographie, CNRS, BP166 F-30842 Grenoble (France); Alianelli, L. [INFM-OGG c/o ESRF, BP220, F-38043 Grenoble Cedex (France); Moignard, B. [C2RMF, 6 Rue des Pyramides, F-75041 Paris Cedex 01 (France); Pichon, L. [C2RMF, 6 Rue des Pyramides, F-75041 Paris Cedex 01 (France); Calligaro, T. [C2RMF, 6 Rue des Pyramides, F-75041 Paris Cedex 01 (France); Dran, J.-C. [C2RMF, 6 Rue des Pyramides, F-75041 Paris Cedex 01 (France)

    2004-10-08

    Elemental analysis by X-ray fluorescence and particle-induced X-ray emission is applied to the study of several Mesoamerican mural samples containing blue pigments. The most characteristic blue pigment is Maya blue, a very stable organo-clay complex originating from the Maya culture and widely used in murals, pottery and sculptures in a vast region of Mesoamerica during pre-hispanic times (from the VIII century) and during the colonization until 1580. The mural samples come from six different archaeological sites (four pre-hispanic and two from XVI-century colonial convents). The correlation between the presence of some elements and the pigment colour is discussed. From the comparative study of the elemental concentrations, some conclusions are drawn on the nature of the pigments and the technology used.

  12. Assessment of Residual Stresses in 3013 Inner and Outer Containers and Teardrop Samples

    Energy Technology Data Exchange (ETDEWEB)

    Stroud, Mary Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Prime, Michael Bruce [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Veirs, Douglas Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Berg, John M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Clausen, Bjorn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Worl, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); DeWald, Adrian T. [Hill Engineering, LLC, Rancho Cordova, CA (United States)

    2015-12-08

    This report is an assessment performed by LANL that examines packaging for plutonium-bearing materials and the resilience of its design. This report discusses residual stresses in the 3013 outer, the SRS/Hanford and RFETS/LLNL inner containers, and teardrop samples used in studies to assess the potential for SCC in 3013 containers. Residual tensile stresses in the heat affected zones of the closure welds are of particular concern.

  13. Assessment of Residual Stresses in 3013 Inner and Outer Containers and Teardrop Samples

    International Nuclear Information System (INIS)

    Stroud, Mary Ann; Prime, Michael Bruce; Veirs, Douglas Kirk; Berg, John M.; Clausen, Bjorn; Worl, Laura Ann; DeWald, Adrian T.

    2015-01-01

    This report is an assessment performed by LANL that examines packaging for plutonium-bearing materials and the resilience of its design. This report discusses residual stresses in the 3013 outer, the SRS/Hanford and RFETS/LLNL inner containers, and teardrop samples used in studies to assess the potential for SCC in 3013 containers. Residual tensile stresses in the heat affected zones of the closure welds are of particular concern.

  14. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as particular cases, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  15. Quality control for measurement of soil samples containing 237Np and 241Am as radiotracer

    International Nuclear Information System (INIS)

    Sha Lianmao; Zhang Caihong; Song Hailong; Ren Xiaona; Han Yuhu; Zhang Aiming; Chu Taiwei

    2003-01-01

    This paper reports quality control (QC) for the measurement of soil samples containing 237 Np and 241 Am as radiotracers in a migration test of transuranic nuclides. All of the QC was done independently by the QA members of the analytical work. It mainly included checking 5%-10% of the total analyzed samples and preparing blank samples, blind replicate samples and spiked samples used as quality control samples to check the quality of the analytical work.

  16. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  17. The effect of shade on the container index and pupal productivity of the mosquitoes Aedes aegypti and Culex pipiens breeding in artificial containers.

    Science.gov (United States)

    Vezzani, D; Albicócco, A P

    2009-03-01

    The aim of this study was to assess whether certain attributes of larval breeding sites are correlated with pupal productivity (i.e. numbers of pupae collected per sampling period), so that these could be used as the focus for control measures to enhance control efficiency. The objectives were therefore to identify the months of highest pupal productivity of Aedes aegypti (L.) and Culex pipiens L. (Diptera: Culicidae) in an urban temperate cemetery in Argentina, and to determine whether the lighting condition and composition of the containers affected pupal productivity. Over a period of 9 months, 200 randomly chosen water-filled containers (100 sunlit and 100 shaded), out of approximately 3738 containers present (approximately 54% in shade), were examined each month within a cemetery (5 ha) in Buenos Aires (October 2006 to June 2007). In total, 3440 immatures of Cx pipiens and 1974 of Ae. aegypti were collected. The larvae : pupae ratio was 10 times greater for the former, indicating that larval mortality was greater for Cx pipiens. Both mosquito species showed a higher container index (CI) in shaded than in sunlit containers (Ae. aegypti: 12.8% vs. 6.9%, chi(2) = 17.6). The number of pupae per pupa-positive container did not differ significantly between sunlit and shaded containers for either species. Therefore, the overall relative productivity of pupae per ha of Ae. aegypti and Cx pipiens was 2.3 and 1.8 times greater, respectively, in shaded than in sunlit areas as a result of the greater CIs of containers in shaded areas. Neither the CI nor the number of immatures per infested container differed significantly among container types of different materials in either lighting condition. The maximum CI and total pupal counts occurred in March for Ae. aegypti and in January and February for Cx pipiens. The estimated peak abundance of pupae in the whole cemetery reached a total of approximately 4388 in the middle of March for Ae. aegypti.
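
    The container index (CI) used above is simply the percentage of inspected water-holding containers that are positive for immatures; a minimal helper, with made-up counts rather than the study's raw data:

```python
def container_index(positive, inspected):
    """Container index (CI): percent of inspected containers positive for immatures."""
    if inspected == 0:
        raise ValueError("no containers inspected")
    return 100.0 * positive / inspected

# Hypothetical monthly counts for two strata (illustrative numbers only):
ci_shaded = container_index(23, 180)  # ~12.8%
ci_sunlit = container_index(12, 174)  # ~6.9%
```

    Productivity comparisons like those in the abstract then multiply such per-stratum indices by the number of containers per hectare in each stratum.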

  18. Processes to Open the Container and the Sample Catcher of the Hayabusa Returned Capsule in the Planetary Material Sample Curation Facility of JAXA

    Science.gov (United States)

    Fujimura, A.; Abe, M.; Yada, T.; Nakamura, T.; Noguchi, T.; Okazaki, R.; Ishibashi, Y.; Shirai, K.; Okada, T.; Yano, H.; hide

    2011-01-01

    The Japanese spacecraft Hayabusa, which returned from the near-Earth asteroid Itokawa, successfully brought its reentry capsule back to Earth, landing in the Woomera Prohibited Area in Australia on June 13th, 2010, as detailed in another paper [1]. The capsule was introduced into the Planetary Material Sample Curation Facility on the Sagamihara campus of JAXA in the early morning of June 18th. Hereafter, we describe the series of processes applied to the returned capsule and its container to recover the gas and materials inside. The transportation box of the recovered capsule was cleaned on its outer surface beforehand and introduced into the class 10,000 clean room of the facility. The capsule was then extracted from the box, its plastic bag was opened, and the outer surface of the capsule was checked and photographed. The capsule was composed of the container, a backside ablator, a side ablator, an electronic box and a supporting frame. The container consists of an outer lid, an inner lid, a frame for latches, a container body and a sample catcher, which is composed of rooms A and B and a rotational cylinder. After the first check, the capsule was packed in a plastic bag with N2 again and transferred to the Chofu campus of JAXA, where the X-ray CT instrument is situated. The first X-ray CT analysis was performed on the whole returned capsule to confirm the condition of the latches and the O-ring seal of the container. The analysis showed that the latches of the container should have worked normally, and that the double O-rings of the container appeared to seal its sample catcher without problem. After the first X-ray CT, the capsule was sent back to Sagamihara and introduced into the clean room, where the electronic box and the side ablator were removed from the container with hand tools. Then, the container with the backside ablator was fixed firmly to special jigs, which held the lid tightly to the container, and set on a milling machine. The backside ablator was drilled by the machine to expose the heads of the bolts.

  19. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  20. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, $PG(b,z)$, is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the $PG(1,z)$ distribution. We build two new samplers that offer improved performance when sampling from the $PG(b,z)$ distribution when $b$ is not unity.
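
    The paper's samplers are not spelled out here, but $PG(b,z)$ admits an infinite sum-of-gammas representation (Polson, Scott and Windle, 2013) that yields a simple approximate sampler by truncation. The sketch below is an illustration of that representation only, not the authors' improved samplers; the truncation level and parameter values are arbitrary choices, and the draws are checked against the known mean (b/2z) tanh(z/2).

```python
import numpy as np

def pg_approx(b, z, num_terms=200, size=1, rng=None):
    """Approximate PG(b, z) draws by truncating the sum-of-gammas
    representation:
        PG(b, z) = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2 / (4 pi^2)),
    with g_k ~ Gamma(b, 1). Truncation introduces a small negative bias
    of order b / (2 pi^2 num_terms)."""
    rng = rng or np.random.default_rng()
    k = np.arange(1, num_terms + 1)
    denom = (k - 0.5) ** 2 + z ** 2 / (4.0 * np.pi ** 2)
    g = rng.gamma(shape=b, scale=1.0, size=(size, num_terms))
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

# The mean of PG(b, z) is (b / (2 z)) * tanh(z / 2); compare with samples.
draws = pg_approx(b=1.0, z=2.0, size=20_000, rng=np.random.default_rng(0))
theoretical_mean = (1.0 / (2 * 2.0)) * np.tanh(2.0 / 2)
```

    For moderate truncation levels the sample mean agrees with the closed-form mean to a few decimal places, which is often adequate when exactness is not required.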

  1. Calorimetric assay of HTGR fuel samples

    International Nuclear Information System (INIS)

    Allen, E.J.; McNeany, S.R.; Jenkins, J.D.

    1979-04-01

    A calorimeter using a neutron source was designed and fabricated by Mound Laboratory, according to ORNL specifications. A calibration curve of the device for HTGR standard fuel rods was experimentally determined. The precision of a single measurement at the 95% confidence level was estimated to be ±0.8 μW. For a fuel sample containing 0.3 g of ²³⁵U and a neutron source containing 691 μg of ²⁵²Cf, this represents a relative standard deviation of 0.5%. Measurement time was approximately 5.5 h per sample. Use of the calorimeter is limited by its relatively poor precision, long measurement time, manual sample changing, sensitivity to room environment, and the possibility of accumulated dust blocking water flow through the calorimeter. The calorimeter could be redesigned to resolve most of these difficulties, but not without significant development work.
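
    For illustration, the implied mean thermal power of such a sample can be backed out of the quoted figures, assuming the 95% half-width corresponds to roughly 1.96 standard deviations (an assumption on our part; the abstract does not state the mean power):

```python
# +/-0.8 uW at 95% confidence ~ 1.96 sigma, and that sigma is stated to be
# 0.5% of the signal, so the implied mean sample power is sigma / 0.005.
half_width_95 = 0.8              # uW, quoted single-measurement precision
sigma = half_width_95 / 1.96     # ~0.41 uW standard deviation
mean_power = sigma / 0.005       # ~82 uW implied mean thermal power
```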

  2. Cyclic approximation to stasis

    Directory of Open Access Journals (Sweden)

    Stewart D. Johnson

    2009-06-01

    Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.

  3. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  4. Subquadratic medial-axis approximation in $\mathbb{R}^3$

    Directory of Open Access Journals (Sweden)

    Christian Scheffer

    2015-09-01

    We present an algorithm that approximates the medial axis of a smooth manifold in $\mathbb{R}^3$ which is given by a sufficiently dense point sample. The resulting, non-discrete approximation is shown to converge to the medial axis as the sampling density approaches infinity. While all previous algorithms guaranteeing convergence have a running time quadratic in the size $n$ of the point sample, we achieve a running time of at most $\mathcal{O}(n \log^3 n)$. While there is no subquadratic upper bound on the output complexity of previous algorithms for non-discrete medial axis approximation, the output of our algorithm is guaranteed to be of linear size.

  5. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  6. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2011

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2011-01-21

    This document contains the calendar year 2011 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and the Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis. If a sample will not be collected in 2011, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2011.

  7. Reliability analysis of steel-containment strength

    International Nuclear Information System (INIS)

    Greimann, L.G.; Fanous, F.; Wold-Tinsae, A.; Ketalaar, D.; Lin, T.; Bluhm, D.

    1982-06-01

    A best estimate and uncertainty assessment of the resistance of the St. Lucie, Cherokee, Perry, WPPSS and Browns Ferry containment vessels was performed. The Monte Carlo simulation technique and the second moment approach were compared as means of calculating the probability distribution of the containment resistance. A uniform static internal pressure was used and strain ductility was taken as the failure criterion. Approximate methods were developed and calibrated with finite element analysis. Both approximate and finite element analyses were performed on the axisymmetric containment structure. An uncertainty assessment of the containment strength was then performed by the second moment reliability method. Based upon the approximate methods, the cumulative distribution for the resistance of each of the five containments (shell modes only) is presented.
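
    The two techniques compared in the study can be illustrated on a toy limit state g = R − S (resistance minus load). The numbers below are hypothetical, not the containment values from the report; with both variables normal, the second-moment (FOSM) estimate is exact, and Monte Carlo simulation should reproduce it.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical limit state g = R - S: R = containment resistance (pressure
# capacity), S = internal pressure demand; both normal for illustration.
mu_R, sd_R = 1.0, 0.10    # assumed values, not from the study
mu_S, sd_S = 0.7, 0.12

# Second-moment estimate: g is normal, reliability index beta = mu_g / sd_g,
# failure probability P_f = Phi(-beta).
mu_g = mu_R - mu_S
sd_g = sqrt(sd_R ** 2 + sd_S ** 2)
beta = mu_g / sd_g
pf_fosm = 0.5 * (1 - erf(beta / sqrt(2)))

# Monte Carlo estimate of the same failure probability.
rng = np.random.default_rng(1)
n = 200_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(R < S)
```

    For nonlinear limit states or non-normal variables the two estimates diverge, which is precisely why the study compares them.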

  8. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2007

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2007-01-31

    This document contains the calendar year 2007 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis and may not be collected in 2007; in that case, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2007.

  9. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and the approximators based on it, approximate a relation in data by interpolating between so-called support vectors, a limited number of samples that have been

  10. Assessment of airborne asbestos exposure during the servicing and handling of automobile asbestos-containing gaskets.

    Science.gov (United States)

    Blake, Charles L; Dotson, G Scott; Harbison, Raymond D

    2006-07-01

    Five test sessions were conducted to assess asbestos exposure during the removal or installation of asbestos-containing gaskets on vehicles. All testing took place within an operative automotive repair facility involving passenger cars and a pickup truck ranging in vintage from the late 1960s through the 1970s. A professional mechanic performed all shop work, including engine disassembly and reassembly, gasket manipulation and parts cleaning. Bulk sample analysis of removed gaskets by polarized light microscopy (PLM) revealed asbestos fiber concentrations ranging between 0 and 75%. Personal and area air samples were collected and analyzed using National Institute for Occupational Safety and Health (NIOSH) methods 7400 [phase contrast microscopy (PCM)] and 7402 [transmission electron microscopy (TEM)]. Among all air samples collected, approximately 21% (n = 11) contained chrysotile fibers. The mean PCM and phase contrast microscopy equivalent (PCME) 8-h time-weighted average (TWA) concentrations for these samples were 0.0031 fibers per cubic centimeter (f/cc) and 0.0017 f/cc, respectively. Based on these findings, automobile mechanics who worked with asbestos-containing gaskets may have been exposed to airborne asbestos concentrations approximately 100 times lower than the current Occupational Safety and Health Administration (OSHA) Permissible Exposure Limit (PEL) of 0.1 f/cc.

  11. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper, we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
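
    The details of AKCL's subspace learning are not reproduced here, but the generic idea of a sampling-based kernel approximation that avoids forming the full n × n kernel matrix can be sketched with the well-known Nyström method (landmark count, kernel bandwidth, and data below are arbitrary choices, not the paper's):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom approximation: build the n x n kernel matrix from m << n sampled
# landmark points, so only n x m and m x m blocks are ever computed.
rng = np.random.default_rng(0)
n, m = 500, 50
X = rng.normal(size=(n, 2))
idx = rng.choice(n, size=m, replace=False)   # sampled landmark indices
C = rbf_kernel(X, X[idx])                    # n x m cross-kernel block
W = C[idx]                                   # m x m landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T       # rank-m surrogate for K

# Compare against the full matrix (formed here only to measure the error).
K_full = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

    Any kernel method that only touches the kernel matrix through products can then run against the low-rank surrogate, which is the same cost-reduction motivation behind AKCL.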

  12. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple-rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
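
    MECCA's hybrid likelihood/ABC-MCMC machinery is beyond a short sketch, but the underlying ABC idea — accept parameter draws whose simulated summary statistics fall close to the observed ones — is simple. Below is a minimal ABC rejection sampler for the rate of Brownian trait evolution on a star phylogeny; this is a toy stand-in, not the authors' model, and the prior, tolerance, and sample sizes are arbitrary.

```python
import numpy as np

# Under Brownian motion on a star tree of depth t, tip traits are i.i.d.
# N(0, rate * t), so the trait variance is a natural summary statistic.
rng = np.random.default_rng(0)
true_rate = 1.5
n_taxa, t = 50, 1.0
observed = rng.normal(0.0, np.sqrt(true_rate * t), n_taxa)
s_obs = observed.var()

accepted = []
for _ in range(20_000):
    rate = rng.uniform(0.0, 5.0)                    # draw from the prior
    sim = rng.normal(0.0, np.sqrt(rate * t), n_taxa)  # simulate data
    if abs(sim.var() - s_obs) < 0.1:                # compare summaries
        accepted.append(rate)                       # keep near-matches

posterior_mean = np.mean(accepted)                  # approximate posterior mean
```

    ABC-MCMC methods such as the one in the paper replace the blind prior draws with a Markov chain proposal, which is far more efficient when acceptances are rare.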

  13. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  14. Improved Procedure for Transport of Dental Plaque Samples and Other Clinical Specimens Containing Anaerobic Bacteria

    Science.gov (United States)

    Spiegel, Carol A.; Minah, Glenn E.; Krywolap, George N.

    1979-01-01

    An improved transport system for samples containing anaerobic bacteria was developed. This system increased the recovery rate of anaerobic bacteria up to 28.8% as compared to a commonly used method. PMID:39087

  15. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory.

    Science.gov (United States)

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-06-15

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated using the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level for the four wall thicknesses. Combining this with the sound field characteristics, the influence of transducer size on the pressure calculation and the detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.
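
    The physical basis of the method is the jump in acoustic reflection at the inner wall: a steel/liquid interface reflects less energy than a steel/gas interface, so the echo amplitude drops where liquid stands behind the wall. A minimal check of that contrast uses the plane-wave reflection coefficient R = (Z2 − Z1)/(Z2 + Z1); the impedance values below are textbook figures, not taken from the paper.

```python
# Normal-incidence pressure reflection coefficient at an interface between
# media with acoustic impedances z1 (incident side) and z2.
def reflection_coefficient(z1, z2):
    return (z2 - z1) / (z2 + z1)

Z_STEEL = 45.0    # approximate acoustic impedance of steel, MRayl
Z_WATER = 1.48    # water, MRayl
Z_AIR = 0.0004    # air, MRayl

r_wall_water = reflection_coefficient(Z_STEEL, Z_WATER)  # inner wall wetted
r_wall_air = reflection_coefficient(Z_STEEL, Z_AIR)      # gas behind wall
```

    With gas behind the wall essentially all energy is reflected (|R| ≈ 1), while a wetted wall transmits a few percent into the liquid; that small but repeatable difference in echo amplitude is what the model in the paper quantifies.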

  16. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    2017-06-01

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated using the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level for the four wall thicknesses. Combining this with the sound field characteristics, the influence of transducer size on the pressure calculation and the detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.

  17. Approximate reasoning in decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M M; Sanchez, E

    1982-01-01

    The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.

  18. Elastic anisotropy of core samples from the Taiwan Chelungpu Fault Drilling Project (TCDP): direct 3-D measurements and weak anisotropy approximations

    Science.gov (United States)

    Louis, Laurent; David, Christian; Špaček, Petr; Wong, Teng-Fong; Fortin, Jérôme; Song, Sheng Rong

    2012-01-01

    The study of seismic anisotropy has become a powerful tool to decipher rock physics attributes in reservoirs or in complex tectonic settings. We compare direct 3-D measurements of P-wave velocity in 132 different directions on spherical rock samples to the prediction of the approximate model proposed by Louis et al. based on a tensorial approach. The data set includes measurements on dry spheres under confining pressure ranging from 5 to 200 MPa for three sandstones retrieved at a depth of 850, 1365 and 1394 metres in TCDP hole A (Taiwan Chelungpu Fault Drilling Project). As long as the P-wave velocity anisotropy is weak, we show that the predictions of the approximate model are in good agreement with the measurements. As the tensorial method is designed to work with cylindrical samples cored in three orthogonal directions, a significant gain both in the number of measurements involved and in sample preparation is achieved compared to measurements on spheres. We analysed the pressure dependence of the velocity field and show that as the confining pressure is raised the velocity increases, the anisotropy decreases but remains significant even at high pressure, and the shape of the ellipsoid representing the velocity (or elastic) fabric evolves from elongated to planar. These observations can be accounted for by considering the existence of both isotropic and anisotropic crack distributions and their evolution with applied pressure.

  19. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
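
    The first class of summary statistics, the folded allele frequency spectrum, is straightforward to compute from unpolarized data: for each SNP, count the rarer allele, since which allele is ancestral is unknown. A minimal sketch on a toy 0/1 haplotype matrix (not the PopSizeABC implementation):

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from a haploid 0/1 genotype matrix
    (rows = sequences, columns = SNPs; allele labels are arbitrary)."""
    n, _ = genotypes.shape
    counts = genotypes.sum(axis=0)            # count of allele '1' per SNP
    minor = np.minimum(counts, n - counts)    # fold: take the minor allele
    return np.bincount(minor, minlength=n // 2 + 1)

g = np.array([[0, 1, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 0]])
spectrum = folded_afs(g)   # number of SNPs with minor-allele count 0, 1, 2
```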

  20. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  1. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
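
    The key observation — that the weight equations become linear once part of the network is fixed — can be shown in a simplified form: holding randomly chosen hidden-layer parameters fixed, a batch of smooth data is matched by solving a single linear system for the output weights. This is a sketch of the algebraic idea only, not the paper's four algorithms.

```python
import numpy as np

# Batch data from a smooth target function (sizes and scales are arbitrary).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 1))        # batch inputs
y = np.sin(np.pi * X[:, 0])                 # batch outputs

# Fix the hidden layer; the input-output matching conditions H @ w2 = y
# are then linear in the output weights w2.
W1 = rng.normal(size=(1, 20))               # fixed input-to-hidden weights
b1 = rng.normal(size=20)                    # fixed hidden biases
H = np.tanh(X @ W1 + b1)                    # hidden activations, 20 x 20

w2 = np.linalg.lstsq(H, y, rcond=None)[0]   # solve for the output layer
residual = np.max(np.abs(H @ w2 - y))       # near-zero: batch is matched
```

    With as many hidden units as training samples the system is square and, generically, solvable exactly, which is the "exact matching" case; fewer units give the least-squares "approximate matching" case.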

  2. A study of western pharmaceuticals contained within samples of Chinese herbal/patent medicines collected from New York City's Chinatown.

    Science.gov (United States)

    Miller, Gretchen M; Stripp, Richard

    2007-09-01

    In America, recent growth in the popularity of Chinese herbal/patent medicines (CHM/CPM) has generated concerns as to the safety of these and other herbal remedies. The lack of strict federal regulation has led to the possibility of improper labeling and even adulteration of these products with western drugs or other chemical contaminants. Our laboratory has conducted an analytical study to determine the presence of undeclared pharmaceuticals and therapeutic substances within CHM/CPM sold in New York City's Chinatown. Ninety representative samples randomly purchased in the form of pills, tablets, creams and teas were screened by appropriate analytical techniques including TLC, GC/MS and HPLC. Five samples contained nine different western pharmaceuticals. Two of these samples contained undeclared or mislabeled substances. One sample contained two pharmaceuticals contraindicated for the people for whom the product was intended. Drugs identified include promethazine, chlormethiazole, chlorpheniramine, diclofenac, chlordiazepoxide, hydrochlorothiazide, triamterene, diphenhydramine and sildenafil citrate (Viagra).

  3. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  4. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
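
    A minimal numerical illustration of the MUSIC idea for point-like scatterers, using the free-space Green function in place of the paper's half-disc substrate Green function (sensor layout and scatterer positions are arbitrary choices): the pseudospectrum, built from the noise subspace of the multistatic response matrix, peaks sharply at the scatterer locations.

```python
import numpy as np

# Sensor array on a line and two hypothetical point scatterers in 2-D.
k = 2 * np.pi                                     # wavenumber (wavelength 1)
sensors = np.stack([np.linspace(-2.0, 2.0, 16), np.zeros(16)], axis=1)
scatterers = np.array([[0.3, 1.0], [-0.8, 1.5]])

def green(points):
    """Columns are Green-function vectors sampled at the sensor array."""
    d = np.linalg.norm(sensors[:, None, :] - points[None, :, :], axis=2)
    return np.exp(1j * k * d) / d

G = green(scatterers)
K = G @ G.T                                       # multistatic response (Born model)

U, s, _ = np.linalg.svd(K)
noise = U[:, scatterers.shape[0]:]                # noise subspace of K

def pseudospectrum(point):
    """Large when the test Green vector lies in the signal subspace."""
    g = green(point[None, :])[:, 0]
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(noise.conj().T @ g)

peak = pseudospectrum(scatterers[0])              # at a true scatterer: huge
off = pseudospectrum(np.array([1.5, 0.7]))        # away from scatterers: O(1)
```

    Replacing `green` with the background-medium Green function is exactly the reduction the paper describes; the explicit half-disc formula is what makes that substitution cheap.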

  5. PlanetVac: Sample Return with a Puff of Gas

    Science.gov (United States)

    Zacny, K.; Mueller, R.; Betts, B. H.

    2014-12-01

    PlanetVac is a regolith sample acquisition mission concept that uses compressed gas to blow material from the surface up a pneumatic tube and directly into a sample return container. The PlanetVac sampling device is built into the lander legs to eliminate the cost and complexity associated with robotic arms and scoops. The pneumatic system can effectively capture fine and coarse regolith, including small pebbles. It is well suited for landed missions to Mars, asteroids, or the Moon. Because of the low pressures on all those bodies, the technique is extremely efficient. If losses are kept to a minimum, 1 gram of compressed gas could efficiently lift 6000 grams of soil. To demonstrate this approach, the PlanetVac lander with four legs and two sampling tubes has been designed, integrated, and tested. Vacuum chamber testing was performed using two well-known planetary regolith simulants: Mars Mojave Simulant (MMS) and lunar regolith simulant JSC-1A. One of the two sampling systems was connected to a mockup of an earth return rocket, while the second sampling system was connected to a lander deck mounted instrument (a clear box for easy viewing). The tests included a drop from a height of approximately 50 cm onto the bed of regolith, deployment of the sampling tubes into the regolith, pneumatic acquisition of sample into an instrument (sample container) and the rocket, and the launch of the rocket. The demonstration has been successful and can be viewed here: https://www.youtube.com/watch?v=DjJXvtQk6no. In most of the tests, 20 grams or more of sample was delivered to the 'instrument' and approximately 5 grams of regolith was delivered into a sampling chamber within the rocket. The gas lifting efficiency was calculated to be approximately 1000:1; that is, 1 gram of gas lofted 1000 grams of regolith. Efficiencies in lower gravity environments are expected to be much higher. This successful, simple and lightweight sample capture demonstration paves the way to using such sampling systems.

  6. Tank farms backlog soil sample and analysis results supporting a contained-in determination

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, C.L., Fluor Daniel Hanford

    1997-02-27

    Soil waste is generated from Tank Farms and associated Tank Farms facilities operations. The soil is a mixed waste because it is an environmental medium which contains tank waste, a listed mixed waste. The soil is designated with the listed waste codes (F001 through F005) which have been applied to all tank wastes. The scope of this report includes Tank Farms soil managed under the Backlog program. The Backlog Tank Farm soil in storage consists of drums and 5 boxes (originally 828 drums). The Backlog Waste Program dealt with 2276 containers of solid waste generated by Tank Farms operations during the time period from 1989 through early 1993. The containers were mismanaged by being left in the field for an extended period of time without being placed into permitted storage. As a corrective action, these containers were placed in interim storage at the Central Waste Complex (CWC) pending additional characterization. The Backlog Waste Analysis Plan (BWAP) (RL 1993) was written to define how Backlog wastes would be evaluated for proper designation and storage. The BWAP was approved in August 1993 and all work required by the BWAP was completed by July 1994. This document presents results of testing performed in 1992 and 1996 that support the attainment of a Contained-In Determination for Tank Farm Backlog soils. The analytical data contained in this report are evaluated against a prescribed decision rule. If the decision rule is satisfied, then the Washington State Department of Ecology (Ecology) may grant a Contained-In Determination. A Contained-In Determination for disposal to an unlined burial trench will be requested from Ecology. The decision rule and testing requirements provided by Ecology are described in the Tank Farms Backlog Soil Sample Analysis Plan (SAP) (WHC 1996).

  7. Analysis of the 2H-evaporator scale samples (HTF-17-56, -57)

    Energy Technology Data Exchange (ETDEWEB)

    Hay, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Diprete, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-13

    Savannah River National Laboratory analyzed scale samples from both the wall and cone sections of the 242-16H Evaporator prior to chemical cleaning. The samples were analyzed for the uranium and plutonium isotopes required for a Nuclear Criticality Safety Assessment of the scale removal process. The analysis of the scale samples found the material to contain crystalline nitrated cancrinite and clarkeite. Samples from both the wall and cone contain depleted uranium. Uranium concentrations of 16.8 wt% and 4.76 wt% were measured in the wall and cone samples, respectively. The ratio of plutonium isotopes in both samples is ~85% Pu-239 and ~15% Pu-238 by mass and shows approximately the same 3.5 times higher concentration in the wall sample versus the cone sample as observed in the uranium concentrations. The mercury concentrations measured in the scale samples were higher than previously reported values. The wall sample contains 19.4 wt% mercury and the cone scale sample 11.4 wt% mercury. The results from the current scale samples show reasonable agreement with previous 242-16H Evaporator scale sample analyses; however, the uranium concentration in the current wall sample is substantially higher than previous measurements.
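
    As a quick consistency check on the reported numbers, the wall-to-cone uranium ratio indeed matches the ~3.5× factor the abstract quotes for both uranium and plutonium:

```python
# Reported uranium concentrations in the wall and cone scale samples (wt%).
wall_u, cone_u = 16.8, 4.76
ratio = wall_u / cone_u        # ~3.5, consistent with the quoted factor
```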

  8. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  9. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
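The central computation — inverting tail probabilities of a count distribution to get confidence limits for a mean fiber count — can be sketched as follows. This is an illustrative reconstruction, not the authors' Stirling-based closed form: it uses SciPy's exact distribution functions, and the overdispersion parameter `a` (variance = mu + a*mu^2, modeling between-counter variability) is a hypothetical input.

```python
import numpy as np
from scipy import stats, optimize

def poisson_exact_ci(count, conf=0.95):
    """Garwood exact CI for a Poisson mean, via the chi-square relation."""
    alpha = 1 - conf
    lo = 0.0 if count == 0 else stats.chi2.ppf(alpha / 2, 2 * count) / 2
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lo, hi

def nb_ci(count, a, conf=0.95):
    """CI for the mean of an overdispersed (negative binomial) count.

    Parameterized so that variance = mu + a*mu**2; the extra term models
    human counting variability, and a -> 0 recovers the Poisson case.
    With this convention the NB size parameter is 1/a and p = 1/(1 + a*mu).
    """
    alpha = 1 - conf
    sf = lambda mu: stats.nbinom.sf(count - 1, 1 / a, 1 / (1 + a * mu))
    cdf = lambda mu: stats.nbinom.cdf(count, 1 / a, 1 / (1 + a * mu))
    lo = 0.0 if count == 0 else optimize.brentq(
        lambda mu: sf(mu) - alpha / 2, 1e-9, 10 * count + 10)
    hi = optimize.brentq(
        lambda mu: cdf(mu) - alpha / 2, 1e-9, 10 * count + 100)
    return lo, hi

print(poisson_exact_ci(20))   # Poisson limit
print(nb_ci(20, a=0.05))      # wider interval: counter variability included
```

As `a` shrinks, the negative binomial interval collapses onto the exact Poisson interval, which is the comparison case discussed in the abstract.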

  10. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
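The reduction can be illustrated with a small convex program. The sketch below is a simplified stand-in for the paper's method: it fits a degree-(1,1) rational with denominator normalized to q(0)=1, uses a plain minimum-norm objective, and solves with SciPy's SLSQP rather than a dedicated QP solver. The data and interval width are synthetic. The key point it demonstrates is that the interval constraints l_i*q(x_i) <= p(x_i) <= u_i*q(x_i) are linear in the unknown coefficients, so the whole problem is a convex QP with a unique solution.

```python
import numpy as np
from scipy.optimize import minimize

# synthetic data: exp(-x) observed with +/-0.15 uncertainty intervals
x = np.linspace(0.0, 2.0, 9)
lo_b, hi_b = np.exp(-x) - 0.15, np.exp(-x) + 0.15

# model r(x) = p(x)/q(x) = (a0 + a1*x) / (1 + b1*x); theta = [a0, a1, b1]
p = lambda t, xx: t[0] + t[1] * xx
q = lambda t, xx: 1.0 + t[2] * xx

cons = []
for xi, l, u in zip(x, lo_b, hi_b):
    # l*q <= p <= u*q and q > 0: all LINEAR in theta
    cons.append({'type': 'ineq', 'fun': lambda t, xi=xi, l=l: p(t, xi) - l * q(t, xi)})
    cons.append({'type': 'ineq', 'fun': lambda t, xi=xi, u=u: u * q(t, xi) - p(t, xi)})
    cons.append({'type': 'ineq', 'fun': lambda t, xi=xi: q(t, xi) - 1e-6})

# strictly convex objective over a polytope -> unique minimizer
res = minimize(lambda t: t @ t, x0=np.array([1.0, 0.0, 0.0]),
               constraints=cons, method='SLSQP')
r = p(res.x, x) / q(res.x, x)
print(res.x, float(np.max(np.maximum(lo_b - r, r - hi_b))))
```

The printed maximum violation should be non-positive (up to solver tolerance): the fitted rational passes through every uncertainty interval.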

  11. Pade approximant calculations for neutron escape probability

    International Nuclear Information System (INIS)

    El Wakil, S.A.; Saad, E.A.; Hendi, A.A.

    1984-07-01

    The neutron escape probability from a non-multiplying slab containing an internal source is defined in terms of a functional relation for the scattering function of the diffuse reflection problem. The Pade approximant technique is used to obtain numerical results, which are compared with exact results. (author)

  12. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)

  13. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright
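The "controls for working memory" step described above is a partial correlation: regress the control variable out of both measures, then correlate the residuals. A minimal NumPy sketch on synthetic data (the variable names and generated data are hypothetical, not the study's dataset):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y after regressing out the control variables:
    residualize both via least squares, then correlate the residuals."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(42)
wm = rng.standard_normal(500)              # "working memory" control
number = rng.standard_normal(500)          # "number precision" measure
math = 0.8 * number + 0.6 * wm + 0.2 * rng.standard_normal(500)

print(np.corrcoef(number, math)[0, 1])     # raw correlation
print(partial_corr(number, math, wm))      # survives the control
```

If, instead, two measures correlated only because both depend on the control, the raw correlation would be high but the partial correlation would drop to near zero — the contrast the study uses to separate number from time.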

  14. Analysis of Dust Samples Collected from an Unused Spent Nuclear Fuel Interim Storage Container at Hope Creek, Delaware.

    Energy Technology Data Exchange (ETDEWEB)

    Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Enos, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    In July 2014, the Electric Power Research Institute and industry partners sampled dust on the surface of an unused canister that had been stored in an overpack at the Hope Creek Nuclear Generating Station for approximately one year. The foreign material exclusion (FME) cover that had been on the top of the canister during storage, and a second recently-removed FME cover, were also sampled. This report summarizes the results of analyses of dust samples collected from the unused Hope Creek canister and the FME covers. Both wet and dry samples of the dust/salts were collected, using SaltSmart(TM) sensors and Scotch-Brite(TM) abrasive pads, respectively. The SaltSmart(TM) samples were leached and the leachate analyzed chemically to determine the composition and surface load per unit area of soluble salts present on the canister surface. The dry pad samples were analyzed by X-ray fluorescence and by scanning electron microscopy to determine dust texture and mineralogy, and by leaching and chemical analysis to determine soluble salt compositions. The analyses showed that the dominant particles on the canister surface were stainless steel particles generated during manufacturing of the canister. Sparse environmentally-derived silicates and aluminosilicates were also present. Salt phases were sparse and consisted mostly of sulfates with rare nitrates and chlorides. On the FME covers, the dusts were mostly silicates/aluminosilicates; the soluble salts were consistent with those on the canister surface, and were dominantly sulfates. It should be noted that the FME covers were washed by rain prior to sampling, which had an unknown effect on the measured salt loads and compositions. Sulfate salts dominated the assemblages on the canister and FME surfaces, and included Ca-SO4, but also Na-SO4, K-SO4, and Na-Al-SO4. It is likely that these salts were formed by particle-gas conversion reactions, either

  15. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sören

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  16. Approximate representations of propagators in an external field

    International Nuclear Information System (INIS)

    Fried, H.M.

    1979-01-01

    A method of forming approximate representations for propagators with external field dependence is suggested and discussed in the context of potential scattering. An integro-differential equation in D+1 variables, where D represents the dimensionality of Euclidean space-time, is replaced by a Volterra equation in one variable. Approximate solutions to the latter provide a generalization of the Bloch-Nordsieck representation, containing the effects of all powers of hard-potential interactions, each modified by a characteristic soft-potential dependence. [fr]

  17. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  18. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ^2) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ^2) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  19. Space-angle approximations in the variational nodal method

    International Nuclear Information System (INIS)

    Lewis, E. E.; Palmiotti, G.; Taiwo, T.

    1999-01-01

    The variational nodal method is formulated such that the angular and spatial approximations maybe examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel assembly sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared

  20. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
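The two sampling-based baselines discussed above can be reproduced in miniature. The fault tree, median probabilities, and error factor below are hypothetical; the sketch only illustrates full Monte Carlo on the top-event probability alongside the one-sided 95/95 Wilks bound (the maximum of 59 samples).

```python
import numpy as np

rng = np.random.default_rng(0)

def top_event(p):
    """Illustrative fault tree: TOP = (A AND B) OR (C AND D), for
    independent basic events with probability columns [pA, pB, pC, pD]."""
    ab, cd = p[:, 0] * p[:, 1], p[:, 2] * p[:, 3]
    return ab + cd - ab * cd

def sample_top(n):
    # lognormal basic events: hypothetical medians, error factor EF = 3
    medians = np.array([1e-3, 2e-3, 5e-4, 1e-3])
    sigma = np.log(3.0) / 1.645        # EF = ratio of 95th to 50th percentile
    p = medians * np.exp(sigma * rng.standard_normal((n, 4)))
    return top_event(p)

mc = sample_top(100_000)               # full Monte Carlo on the top event
wilks95 = sample_top(59).max()         # one-sided 95/95 Wilks bound: n = 59
print(np.percentile(mc, 95), wilks95)
```

With 95% confidence the Wilks bound exceeds the true 95th percentile while costing only 59 evaluations instead of 100,000 — the trade-off the abstract describes.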

  1. Extended random-phase approximation with three-body ground-state correlations

    International Nuclear Information System (INIS)

    Tohyama, M.; Schuck, P.

    2008-01-01

    An extended random-phase approximation (ERPA) which contains the effects of ground-state correlations up to a three-body level is applied to an extended Lipkin model which contains an additional particle-scattering term. Three-body correlations in the ground state are necessary to preserve the hermiticity of the Hamiltonian matrix of ERPA. Two approximate forms of ERPA which neglect the three-body correlations are also applied to investigate the importance of three-body correlations. It is found that the ground-state energy is little affected by the inclusion of the three-body correlations. On the contrary, three-body correlations for the excited states can become quite important. (orig.)

  2. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...

  3. System for sampling active solutions in transport container; Systeme de prelevements de solutions actives sur les recipients de transport

    Energy Technology Data Exchange (ETDEWEB)

    Fradin, J.

    1958-12-03

    This report presents a system aimed at sampling active solution from a specific transport container (SCRGR model) while transferring this solution with a maximum safety. The sampling principle is described (a flexible tube connected to the receiving container, with a needle at the other end which goes through a rubber membrane and enters a plunger tube). Its benefits are outlined (operator protection, reduction of contamination risk; only the rubber membrane is removed and replaced). Some manufacturing details are described concerning the membrane and the cover.

  4. Transuranic waste characterization sampling and analysis plan

    International Nuclear Information System (INIS)

    1994-01-01

    Los Alamos National Laboratory (the Laboratory) is located approximately 25 miles northwest of Santa Fe, New Mexico, situated on the Pajarito Plateau. Technical Area 54 (TA-54), one of the Laboratory's many technical areas, is a radioactive and hazardous waste management and disposal area located within the Laboratory's boundaries. The purpose of this transuranic waste characterization, sampling, and analysis plan (CSAP) is to provide a methodology for identifying, characterizing, and sampling approximately 25,000 containers of transuranic waste stored at Pads 1, 2, and 4, Dome 48, and the Fiberglass Reinforced Plywood Box Dome at TA-54, Area G, of the Laboratory. Transuranic waste currently stored at Area G was generated primarily from research and development activities, processing and recovery operations, and decontamination and decommissioning projects. This document was created to facilitate compliance with several regulatory requirements and program drivers that are relevant to waste management at the Laboratory, including concerns of the New Mexico Environment Department

  5. Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space

    International Nuclear Information System (INIS)

    Athalye, Vivek; Lustig, Michael; Uecker, Martin

    2015-01-01

    In magnetic resonance imaging data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of samples selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)

  6. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results through practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.

  7. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin- (WKB or quasi-classical) approximation and of the method of 1/N -expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter, the material is presented as a succession of problems, followed by a detailed way of solving them. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  8. Liquid scintillation measurements of aqueous 14C or 3H containing samples in a toluene cocktail

    International Nuclear Information System (INIS)

    Engelmann, A.; Reinhard, G.

    1980-01-01

    On the basis of investigations of the ternary system toluene/methanol/water, the composition of toluene/methanol scintillation cocktails has been determined which allows liquid scintillation measurements of 14C- or 3H-containing samples in homogeneous distribution. Because of more pronounced quenching, the optimum sample quantity was smaller for blood solutions extracted with a HClO4/H2O2 mixture than for water. The effect of beta radiation energy has to be taken into account. (author)

  9. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
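A toy version of such a simulation, with hypothetical uncertainty parameters (population size, number of falsified items, falsification bias, measurement sigma, and alarm threshold are all invented for illustration), shows how a required sample size can be found by direct simulation rather than by conservative analytical approximation:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_prob(n, N=500, r=20, bias=4.0, sigma=1.0,
                   threshold=3.0, trials=2000):
    """Estimate the probability that a random sample of n items flags at
    least one discrepancy, when r of the N items are falsified by `bias`
    and verification measurements carry Gaussian error `sigma`."""
    hits = 0
    for _ in range(trials):
        idx = rng.choice(N, size=n, replace=False)
        # measured-minus-declared discrepancy; items 0..r-1 are falsified
        z = rng.normal(0.0, sigma, size=n) + bias * (idx < r)
        hits += np.any(z > threshold)
    return hits / trials

# smallest n on a coarse grid reaching 95% simulated detection probability
for n in range(10, 200, 10):
    if detection_prob(n) >= 0.95:
        break
print(n, detection_prob(n))
```

Rerunning the search with different bias/sigma values shows the abstract's second conclusion directly: the sample size needed for a given detection probability is highly sensitive to the measurement uncertainty parameters.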

  10. Determination of 210Pb and 210Po in soil or rock samples containing refractory matrices

    International Nuclear Information System (INIS)

    Jia Guogang; Torri, Giancarlo

    2007-01-01

    A new method has been developed for determination of 210Pb and 210Po in soil or rock samples containing refractory matrices. The samples were first fused with Na2CO3 and Na2O2 at 600 °C for pre-treatment, and then 210Pb and 210Po were sequentially leached out at 200-250 °C with HNO3+HF, HClO4 and HCl. About 10% of the leaching solution was used for 210Po determination, carried out by spontaneous deposition of polonium on a silver disc from a weakly acidic solution that contained hydroxylamine hydrochloride, sodium citrate and 209Po tracer, measurement being made by α-spectrometry. The remainder of the leaching solution was used for determination of 210Pb, conducted by precipitation as sulphate, purification with Na2S as PbS in 6 M ammonium acetate, separation from α-emitters on an anion-exchange resin column, source preparation as PbSO4, and measurement with a β-counter. The procedure has been checked with two certified IAEA reference materials, showing good agreement with the recommended values. The lower limits of detection for 1 g of analysed soil or rock sample were found to be 0.75 Bq kg⁻¹ for 210Po and 2.2 Bq kg⁻¹ for 210Pb. A variety of solid sample species analysed with the procedure gave average yields of 90.0±9.8% for 210Po and 88.4±7.1% for 210Pb.
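The yield-corrected activity calculation implied by the use of a 209Po tracer can be sketched as follows. The function and parameter names are illustrative (not from the paper), and decay/ingrowth corrections are omitted for brevity:

```python
def activity_from_tracer(sample_counts, tracer_counts,
                         tracer_activity_bq, sample_mass_kg):
    """Isotope-dilution alpha spectrometry: because the 209Po tracer is
    chemically identical to the 210Po analyte, the ratio of the two peak
    areas cancels detection efficiency, chemical yield, and counting time,
    leaving activity per unit sample mass (Bq/kg)."""
    return (sample_counts / tracer_counts) * tracer_activity_bq / sample_mass_kg

# e.g. 500 net counts in the 210Po peak, 1000 in the 209Po peak,
# 0.05 Bq of tracer added to 1 g of dissolved rock:
print(activity_from_tracer(500, 1000, 0.05, 0.001))  # 25.0 Bq/kg
```

The same cancellation is why the reported chemical yields (~90%) affect only the counting statistics, not the final activity value, as long as analyte and tracer behave identically through the separation.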

  11. Recommended Immunological Assays to Screen for Ricin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Ricin, a toxin from the plant Ricinus communis, is one of the most toxic biological agents known. Due to its availability, toxicity, ease of production and absence of curative treatments, ricin has been classified by the Centers for Disease Control and Prevention (CDC) as a category B biological agent and is scheduled as a List 1 compound in the Chemical Weapons Convention. An international proficiency test (PT) was conducted to evaluate the detection and quantification capabilities of 17 expert laboratories. One goal of this exercise was to analyse the laboratories' capacity to detect and differentiate ricin and the less toxic, but highly homologous, protein R. communis agglutinin (RCA120). Six analytical strategies are presented in this paper based on immunological assays (four immunoenzymatic assays and two immunochromatographic tests). Using these immunological methods, "dangerous" samples containing ricin and/or RCA120 were successfully identified, and ricin and RCA120 were detected and quantified using different antibodies. The ricin PT highlighted the performance of different immunological approaches that are recommended for highly sensitive and precise quantification of ricin.

  12. Development of the relativistic impulse approximation

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1985-01-01

    This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references

  13. Determination of radiocaesium in agriculture-related water samples containing suspended solids using gelling method

    International Nuclear Information System (INIS)

    Matsunami, Hisaya; Shin, Moono; Takahashi, Yoshihiko; Shinano, Takuro; Kitajima, Shiori; Tsuchiya, Takashi

    2015-01-01

    After the TEPCO Fukushima Dai-ichi Nuclear Power Plant accident in 2011, the radiocaesium that flowed into paddy fields via irrigation water has been widely investigated. When the concentration of radiocaesium in water samples containing suspended solids was measured directly using a high-purity germanium detector with a 2 L Marinelli beaker, the radiocaesium concentration could be overestimated due to sedimentation of the suspended solids during the measurement time. In fact, the values obtained by the direct method were higher than those obtained by the filtering method and/or the gelling method in most of the agriculture-related water samples. We conclude that the gelling method using sodium polyacrylate can be widely adopted for the analysis of total radiocaesium in agriculture-related water samples because of its many advantages, such as a simple preparation procedure, accurate analytical values, excellent long-term stability of the counting geometry, and low operating cost. (author)

  14. Organic sulphur in macromolecular sedimentary organic matter. II. Analysis of distributions of sulphur-containing pyrolysis products using multivariate techniques

    NARCIS (Netherlands)

    Sinninghe Damsté, J.S.; Eglinton, T.I.; Pool, W.; Leeuw, J.W. de; Eijkel, G.; Boon, J.J.

    1992-01-01

    This study describes the analysis of sulphur-containing products from Curie-point pyrolysis (Py) of eighty-five samples (kerogens, bitumens, petroleum asphaltenes, and coals) using gas chromatography (GC) in combination with sulphur-selective detection. Peak areas of approximately forty individual

  15. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  16. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  17. Directional dependency of air sampling

    International Nuclear Information System (INIS)

    1994-01-01

    A field study was performed by Idaho State University-Environmental Monitoring Laboratory (EML) to examine the directional dependency of low-volume air samplers. A typical continuous low-volume air sampler contains a sample head that is mounted on the sampler housing either horizontally through one of four walls or vertically on an exterior wall 'looking down or up.' In 1992, a field study was undertaken to estimate sampling error and to detect the directional effect of sampler head orientation. Approximately 1/2 mile downwind from a phosphate plant (a continuous source of alpha activity), four samplers were positioned in identical orientation alongside one sampler configured with the sample head 'looking down'. At least five consecutive weekly samples were collected. The alpha activity, beta activity, and the Be-7 activity collected on the particulate filter were analyzed to determine sampling error. Four sample heads were then oriented to the four different horizontal directions. Samples were collected for at least five weeks. Analysis of the alpha data shows the effect of sampler orientation relative to a known, nearby source term. Analysis of the beta and Be-7 activity shows the effect of sampler orientation relative to a ubiquitous source term.

  18. Evaluation of environmental samples containing heavy hydrocarbon components in environmental forensic investigations

    Energy Technology Data Exchange (ETDEWEB)

    Raia, J.C.; Blakley, C.R.; Fuex, A.N.; Villalanti, D.C.; Fahrenthold, P.D. [Triton Anal Corp, Houston, TX (United States)]

    2004-03-01

    This article presents a procedure to evaluate and characterize environmental samples containing mixtures of hydrocarbons over a wide boiling range of materials that include fuels and other products used in commerce. The range of the method extends to the higher boiling and heavier molecular weight hydrocarbon products in the range of motor oil, bunker fuel, and heavier residue materials. The procedure uses the analytical laboratory technique of high-temperature simulated distillation along with mathematical regression of the analytical data to estimate the relative contribution of individual products in mixtures of hydrocarbons present in environmental samples. An analytical technique to determine hydrocarbon-type distributions by gas chromatography-mass spectrometry with nitric oxide ionization spectrometry evaluation is also presented. This type of analysis allows complex hydrocarbon mixtures to be classified by their chemical composition, or types of hydrocarbons, which include paraffins, cycloparaffins, monoaromatics, and polycyclic aromatic hydrocarbons. Characteristic hydrocarbon patterns, for example in the relative distribution of polycyclic aromatic hydrocarbons, are valuable for determining the potential origin of materials present in environmental samples. These methods provide quantitative data for hydrocarbon components in mixtures as a function of boiling range and 'hydrocarbon fingerprints' of the types of materials present. This information is valuable in assessing environmental impacts of hydrocarbons at contaminated sites and establishing the liabilities and cost allocations for responsible parties.

  19. The buffer/container experiment design and construction report

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, N.A.; Wan, A.W.L.; Roach, P.J.

    1998-03-01

    The Buffer/Container Experiment was a full-scale in situ experiment, installed at a depth of 240 m in granitic rock at AECL's Underground Research Laboratory (URL). The experiment was designed to examine the performance of a compacted sand-bentonite buffer material under the influences of elevated temperature and in situ moisture conditions. Buffer material was compacted in situ into a 5-m-deep, 1.24-m-diameter borehole drilled into the floor of an excavation. A 2.3-m long heater, representative of a nuclear fuel waste container, was placed within the buffer, and instrumentation was installed to monitor changes in buffer moisture conditions, temperature and stress. The experiment was sealed at the top of the borehole and restrained against vertical displacement. Instrumentation in the rock monitored pore pressures, temperatures and rock displacement. The heater was operated at a constant power of 1200 W, which provided a heater skin temperature of approximately 85 degrees C. Experiment construction and installation required two years, followed by two and a half years of heater operation and two years of monitoring the rock conditions during cooling. The construction phase of the experiment included the design, construction and testing of a segmental heater and controller, geological and hydrogeological characterization of the rock, excavation of the experiment room, drilling of the emplacement borehole using high pressure water, mixing and in situ compaction of buffer material, installation of instrumentation in the rock, buffer and on the heater, and the construction of a concrete curb and steel vertical restraint system at the top of the emplacement borehole. Upon completion of the experiment, decommissioning sampling equipment was designed and constructed and sampling methods were developed which allowed approximately 2000 samples of buffer material to be taken over a 12-day period. Quality assurance procedures were developed for all aspects of experiment construction.

  20. The buffer/container experiment design and construction report

    International Nuclear Information System (INIS)

    Chandler, N.A.; Wan, A.W.L.; Roach, P.J.

    1998-03-01

    The Buffer/Container Experiment was a full-scale in situ experiment, installed at a depth of 240 m in granitic rock at AECL's Underground Research Laboratory (URL). The experiment was designed to examine the performance of a compacted sand-bentonite buffer material under the influences of elevated temperature and in situ moisture conditions. Buffer material was compacted in situ into a 5-m-deep, 1.24-m-diameter borehole drilled into the floor of an excavation. A 2.3-m long heater, representative of a nuclear fuel waste container, was placed within the buffer, and instrumentation was installed to monitor changes in buffer moisture conditions, temperature and stress. The experiment was sealed at the top of the borehole and restrained against vertical displacement. Instrumentation in the rock monitored pore pressures, temperatures and rock displacement. The heater was operated at a constant power of 1200 W, which provided a heater skin temperature of approximately 85 degrees C. Experiment construction and installation required two years, followed by two and a half years of heater operation and two years of monitoring the rock conditions during cooling. The construction phase of the experiment included the design, construction and testing of a segmental heater and controller, geological and hydrogeological characterization of the rock, excavation of the experiment room, drilling of the emplacement borehole using high pressure water, mixing and in situ compaction of buffer material, installation of instrumentation in the rock, buffer and on the heater, and the construction of a concrete curb and steel vertical restraint system at the top of the emplacement borehole. Upon completion of the experiment, decommissioning sampling equipment was designed and constructed and sampling methods were developed which allowed approximately 2000 samples of buffer material to be taken over a 12-day period. Quality assurance procedures were developed for all aspects of experiment construction.

  1. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2010-01-08

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by Pacific Northwest National Laboratory for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford Site environs per regulatory requirements. This document contains the calendar year 2010 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and the Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis. If a sample will not be collected in 2010, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2010.

  2. Investigation of fire at Council, Alaska: A release of approximately 3000 curies of tritium

    International Nuclear Information System (INIS)

    Jensen, G.A.; Martin, J.B.

    1988-04-01

    On September 6, 1987, at about 6:00 a.m., a fire was discovered in the community building at Council, Alaska, where 12 radioluminescent (RL) light panels containing approximately 3000 Ci of tritium were stored. All of the tritium in the panels was released as a result of the fire. This report summarizes the recovery of the remains of the panels destroyed in the fire and investigations completed to evaluate the fire site for possible exposure of community residents or contamination by tritium release in the environment. Based on the analysis of urine samples obtained from individuals in the community and from Pacific Northwest Laboratory personnel participating in the recovery operation, no evidence of exposure to individuals was found. No tritium (above normal background) was found in water and vegetation samples obtained at various locations near the site. 12 figs., 3 tabs

  3. Cosmological models in globally geodesic coordinates. II. Near-field approximation

    International Nuclear Information System (INIS)

    Liu Hongya

    1987-01-01

    A near-field approximation dealing with the cosmological field near a typical freely falling observer is developed within the framework established in the preceding paper [J. Math. Phys. 28, xxxx(1987)]. It is found that for the matter-dominated era the standard cosmological model of general relativity contains the Newtonian cosmological model, proposed by Zel'dovich, as its near-field approximation in the observer's globally geodesic coordinate system

  4. Hanford Site Environmental Surveillance Master Sampling Schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1999-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, ''General Environmental Protection Program,'' and DOE Order 5400.5, ''Radiation Protection of the Public and the Environment.'' The sampling methods are described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL-91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the CY1999 schedules for the routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section includes the sampling location, sample type, and analyses to be performed on the sample. In some cases, samples are scheduled on a rotating basis and may not be collected in 1999, in which case the anticipated year for collection is provided. In addition, a map is included for each medium showing approximate sampling locations.

  5. Approximating the ground state of gapped quantum spin systems

    Energy Technology Data Exchange (ETDEWEB)

    Michalakis, Spyridon [Los Alamos National Laboratory]; Hamza, Eman [NON LANL]; Nachtergaele, Bruno [NON LANL]; Sims, Robert [NON LANL]

    2009-01-01

    We consider quantum spin systems defined on finite sets V equipped with a metric. In typical examples, V is a large but finite subset of Z^d. For finite range Hamiltonians with uniformly bounded interaction terms and a unique, gapped ground state, we demonstrate a locality property of the corresponding ground state projector. In such systems, this ground state projector can be approximated by the product of observables with quantifiable supports. In fact, given any subset χ ⊂ V, the ground state projector can be approximated by the product of two projections, one supported on χ and one supported on χ^c, and a bounded observable supported on a boundary region, in such a way that as the boundary region increases, the approximation becomes better. Such an approximation was useful in proving an area law in one dimension, and this result corresponds to a multi-dimensional analogue.

  6. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2009

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2009-01-20

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory for the U.S. Department of Energy. Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 450.1 and DOE Order 5400.5. This document contains the calendar year 2009 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis. If a sample will not be collected in 2009, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2009.

  7. Fusion impulse containment

    International Nuclear Information System (INIS)

    Bohachevsky, I.O.

    1979-01-01

    The characteristics of impact fusion energy releases are not known sufficiently well to examine in detail specific containment vessel concepts or designs. Therefore it appears appropriate to formulate the impulse containment problem in general and to derive results in the form of explicit expressions from which magnitude estimates and parametric dependencies (trends) can be inferred conveniently and rapidly. In the following presentation we carry out this task using assumptions and approximations that are required to perform the analysis

  8. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul

    2017-01-01

    is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
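
    This record's abstract is truncated, but the baseline it compares against, i.i.d. sampling from an ABC approximation, is easy to illustrate. The following is a minimal rejection-ABC sketch with an invented Gaussian toy model (the uniform prior, the 50-draw summary, and the tolerance are all assumptions for illustration), not the multilevel Monte Carlo estimator the paper develops:

```python
import random

def abc_rejection(s_obs, prior_sample, simulate, eps, n_accept=500, seed=0):
    """Plain rejection ABC: keep prior draws whose simulated summary
    statistic lands within eps of the observed summary."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - s_obs) <= eps:
            accepted.append(theta)
    return accepted

# Toy model: theta ~ Uniform(-5, 5); the summary is the mean of 50 draws N(theta, 1).
prior = lambda rng: rng.uniform(-5.0, 5.0)
model = lambda th, rng: sum(rng.gauss(th, 1.0) for _ in range(50)) / 50

posterior = abc_rejection(s_obs=1.0, prior_sample=prior, simulate=model, eps=0.2)
print(sum(posterior) / len(posterior))  # approximate posterior mean, near 1.0
```

    Shrinking eps makes the ABC approximation more accurate but collapses the acceptance rate, which is exactly the cost trade-off that multilevel methods aim to mitigate.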

  9. Hanford Site Environmental Surveillance Master Sampling Schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    2000-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, General Environmental Protection Program, and DOE Order 5400.5, Radiation Protection of the Public and the Environment. The sampling design is described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL-91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the CY 2000 schedules for the routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section includes sampling locations, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis and may not be collected in 2000, in which case the anticipated year for collection is provided. In addition, a map showing approximate sampling locations is included for each medium scheduled for collection.

  10. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
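
    The flavor of these approximations, a numerical optimization to locate the posterior mode followed by simple algebra on the curvature, can be illustrated with a Laplace-style sketch. The measurement values, the known measurement standard deviation, and the prior below are invented for the example; this is not one of the paper's case studies:

```python
from math import sqrt

# Hypothetical repeated measurements of a measurand, with known
# measurement standard deviation and a Gaussian prior on the measurand.
data = [9.8, 10.3, 10.1, 9.9, 10.4]
sigma = 0.3          # known measurement standard deviation
m0, s0 = 9.0, 1.0    # prior: measurand ~ N(m0, s0^2)

def neg_log_post(theta):
    """Negative log-posterior, up to an additive constant."""
    like = sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)
    prior = (theta - m0) ** 2 / (2 * s0 ** 2)
    return like + prior

# Step 1: numerical optimization -- ternary search for the posterior mode.
lo, hi = 0.0, 20.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if neg_log_post(m1) < neg_log_post(m2):
        hi = m2
    else:
        lo = m1
theta_hat = (lo + hi) / 2

# Step 2: simple algebra -- the curvature at the mode gives the
# standard uncertainty (Laplace / Gaussian approximation).
h = 1e-4
curvature = (neg_log_post(theta_hat + h) - 2 * neg_log_post(theta_hat)
             + neg_log_post(theta_hat - h)) / h ** 2
u = 1 / sqrt(curvature)
print(theta_hat, u)
```

    For this conjugate Gaussian toy problem the Laplace approximation is exact, so the result can be checked against the closed-form posterior mean and standard deviation; in less tractable models the same two steps give the kind of easy-to-implement approximation the abstract describes.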

  11. Approximate Boundaries for West Lake Landfill, Missouri, 2014, EPA REG 07

    Data.gov (United States)

    U.S. Environmental Protection Agency — This ESRI File Geodatabase Feature Class contains polygons for GIS depicting the approximate boundaries for West Lake Landfill (MOD079900932), Missouri, 2014, EPA...

  12. Approximated solutions to Born-Infeld dynamics

    International Nuclear Information System (INIS)

    Ferraro, Rafael; Nigro, Mauro

    2016-01-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  13. Approximated solutions to Born-Infeld dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)]; Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)]

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  14. On the mathematical treatment of the Born-Oppenheimer approximation

    International Nuclear Information System (INIS)

    Jecko, Thierry

    2014-01-01

    Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.

  15. On the mathematical treatment of the Born-Oppenheimer approximation

    Energy Technology Data Exchange (ETDEWEB)

    Jecko, Thierry, E-mail: thierry.jecko@u-cergy.fr [AGM, UMR 8088 du CNRS, Université de Cergy-Pontoise, Département de mathématiques, site de Saint Martin, 2 avenue Adolphe Chauvin, F-95000 Pontoise (France)

    2014-05-15

    Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not fit exactly the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.

  16. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample size allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of the hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. Because the objective of an IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) approximate sample allocation with the correctly applied standard binomial approximation, (2) approximate sample allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
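
    The gap between the two sampling models in this abstract is easy to demonstrate. The sketch below computes the smallest sample size that detects at least one defective item with a required probability under each model; the population size, defect count, and 95% detection goal are invented for illustration, not taken from the IAEA plan:

```python
from math import comb

def p_miss_hypergeom(N, D, n):
    """P(sample of n contains no defective item), sampling without replacement."""
    if n > N - D:
        return 0.0
    return comb(N - D, n) / comb(N, n)

def p_miss_binomial(N, D, n):
    """Same probability under the binomial model (sampling with replacement)."""
    return (1 - D / N) ** n

def min_sample_size(p_miss, N, D, beta=0.05):
    """Smallest n whose detection probability is at least 1 - beta."""
    for n in range(1, N + 1):
        if p_miss(N, D, n) <= beta:
            return n
    return N

N, D = 200, 10  # hypothetical stratum: 200 items, 10 of them defective
n_hyper = min_sample_size(p_miss_hypergeom, N, D)
n_binom = min_sample_size(p_miss_binomial, N, D)
print(n_hyper, n_binom)
```

    Because sampling without replacement can only raise the detection probability, the exact hypergeometric calculation never requires more samples than the binomial approximation, which is the motivation for using the hypergeometric distribution (or a better approximation to it) directly.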

  17. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires an exponentially increasing volume of data as the dimensionality of the data increases. At the same time, the high-dimensional data is often arranged around a much lower dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that the neural networks using combined unsupervised and supervised learning in most cases outperform the neural networks that learn the function approximation using the original high-dimensional data.

  18. Evaluation of five sampling methods for Liposcelis entomophila (Enderlein) and L. decolor (Pearman) (Psocoptera: Liposcelididae) in steel bins containing wheat

    Science.gov (United States)

    An evaluation of five sampling methods for studying psocid population levels was conducted in two steel bins containing 32.6 metric tonnes of wheat in Manhattan, KS. Psocids were sampled using a 1.2-m open-ended trier, corrugated cardboard refuges placed on the underside of the bin hatch or the surf...

  19. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods, such as Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
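
    The core idea, moving an estimate along a simulated direction with decreasing gains until a simulated summary statistic matches the observed one, can be sketched in a few lines. Everything here (the Gaussian toy model, the observed summary of 2.5, the 1/k gain schedule) is an invented illustration of Robbins-Monro-style stochastic approximation, not the authors' algorithm:

```python
import random

def simulate_summary(theta, rng, n=100):
    """Toy forward model: the summary statistic is the sample mean
    of n draws from N(theta, 1)."""
    return sum(rng.gauss(theta, 1.0) for _ in range(n)) / n

def stochastic_approximation(s_obs, theta0=0.0, iters=2000, seed=1):
    """Robbins-Monro iteration: step along the simulated discrepancy
    with decreasing gains a_k = 1/k."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, iters + 1):
        theta += (1.0 / k) * (s_obs - simulate_summary(theta, rng))
    return theta

theta_hat = stochastic_approximation(s_obs=2.5)
print(theta_hat)  # settles near the theta whose expected summary is 2.5
```

    With the 1/k schedule the iteration reduces to an average of the simulated discrepancies, so the estimate settles near 2.5 with an error that shrinks as the iteration count grows; in real applications the direction is simulated from the actual population genetic model and many summary statistics are matched at once.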

  20. Container for nuclear fuel powders

    International Nuclear Information System (INIS)

    Etheredge, B.F.; Larson, R.I.

    1982-01-01

    A critically safe container for the storage and rapid discharge of enriched nuclear fuel material in powder form is disclosed. The container has a hollow, slab-shaped container body that has one critically safe dimension. A powder inlet is provided on one side wall of the body adjacent to a corner thereof, and a powder discharge port is provided at another corner of the body approximately diagonal to the powder inlet. Gas plenums for moving the powder during discharge are located along the side walls of the container adjacent to the discharge port.

  1. Post-Newtonian approximation of the maximum four-dimensional Yang-Mills gauge theory

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1982-01-01

    We have calculated the post-Newtonian approximation of the maximum four-dimensional Yang-Mills theory proposed by Hsu. The theory contains torsion; however, torsion is not active at the level of the post-Newtonian approximation of the metric. Depending on the nature of the approximation, we obtain the general-relativistic values for the classical Robertson parameters (γ = β = 1), but deviations for the Nordtvedt effect and violations of post-Newtonian conservation laws. We conclude that in its present form the theory is not a viable theory of gravitation

  2. Containment analysis for the simultaneous detonation of two nuclear explosives

    International Nuclear Information System (INIS)

    Terhune, R.W.; Glenn, H.D.; Burton, D.E.; Rambo, J.T.

    1977-01-01

    The explosive phenomenology associated with the simultaneous detonation of two 2.2-kt nuclear explosives is examined. A comprehensive spatial-time pictorial of the resultant shock-wave phenomenology is given. The explosives were buried at depths of 200 m and 280 m, corresponding to a separation of approximately 4 final cavity radii. Constitutive relations for the surrounding medium were derived from the geophysical logs and core samples taken from an actual emplacement configuration at the Nevada Test Site. Past calculational studies indicate that successful containment may depend upon the development of a strong tangential-stress field (or "containment cage") surrounding the cavity at late times. A series of conditions that must be met to ensure formation of this cage is presented. Calculational results, based on one- and two-dimensional finite-difference codes of continuum mechanics, describe how each condition has been fulfilled and illustrate the dynamic sequence of events important to the formation of the containment cage. They also indicate, at least for the geological site chosen, that two nuclear explosives do not combine to threaten containment

  3. Analytic approximations for the elastic moduli of two-phase materials

    DEFF Research Database (Denmark)

    Zhang, Z. J.; Zhu, Y. K.; Zhang, P.

    2017-01-01

    Based on the models of series and parallel connections of the two phases in a composite, analytic approximations are derived for the elastic constants (Young's modulus, shear modulus, and Poisson's ratio) of elastically isotropic two-phase composites containing second phases of various volume...

  4. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for sequentially approximating an unknown function on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distribution of the points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
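The one-point-per-step idea can be sketched with a Kaczmarz-style update, which has the flavor of the sequential approximation described here: each iteration touches a single data point and updates the coefficient vector with no matrix factorization. This is an assumption-laden simplification, not the paper's SA/NNR method; the monomial basis, sweep count, and point set are illustrative.

```python
import numpy as np

def sequential_fit(points, values, degree=3, sweeps=300, seed=0):
    """Kaczmarz-style sequential approximation: the coefficient vector is
    refined with one data point per step and no matrix operations."""
    rng = np.random.default_rng(seed)
    def phi(x):                      # monomial basis (illustrative choice)
        return x ** np.arange(degree + 1)
    c = np.zeros(degree + 1)
    n = len(points)
    for _ in range(sweeps * n):
        i = rng.integers(n)          # visit the scattered points randomly
        p = phi(points[i])
        r = values[i] - p @ c        # residual at this single point
        c += r / (p @ p) * p         # project onto that point's constraint
    return c, phi

# Approximate f(x) = x^3 - x from 40 scattered points in [-1, 1].
x = np.random.default_rng(1).uniform(-1.0, 1.0, 40)
y = x**3 - x
c, phi = sequential_fit(x, y)
```

Because the target lies in the span of the basis, the randomized one-point projections converge to an interpolating coefficient vector.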

  5. Investigation of the fire at the Uranium Enrichment Laboratory. Analysis of samples and pressurization experiment/analysis of container

    International Nuclear Information System (INIS)

    Akabori, Mitsuo; Minato, Kazuo; Watanabe, Kazuo

    1998-05-01

    To investigate the cause of the fire at the Uranium Enrichment Laboratory of the Tokai Research Establishment on November 20, 1997, samples of uranium metal waste and scattered residues were analyzed. At the same time the container lid that had been blown off was closely inspected, and the pressurization effects of the container were tested and analyzed. It was found that 1) the uranium metal waste mainly consisted of uranium metal, carbides and oxides, whose relative amounts were dependent on the particle size, 2) the uranium metal waste hydrolyzed to produce combustible gases such as methane and hydrogen, and 3) the lid of the outer container could be blown off by an explosive rise of the inner pressure caused by combustion of inflammable gas mixture. (author)

  6. 7 CFR 201.42 - Small containers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Small containers. 201.42 Section 201.42 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...

  7. Work inside ocean freight containers--personal exposure to off-gassing chemicals.

    Science.gov (United States)

    Svedberg, Urban; Johanson, Gunnar

    2013-11-01

    More than 500 million ocean freight container units are shipped annually between countries and continents. Residual levels of fumigants, as well as naturally occurring off-gassing chemicals emanating from the goods, constitute safety risks which may affect uniformed workers upon entering the container. The aim of this study was to assess workers' exposure during stripping of containers; it is the first study of its kind. First, an experimental tracer gas method was investigated to determine its usefulness for approximating real exposures to gaseous fumigants and off-gassing volatile organic compounds (VOCs). Nitrous oxide was injected and left to distribute in the closed containers. The distribution of the tracer gas and the initial (arrival) concentrations of off-gassing volatiles were measured prior to opening the containers. Second, personal exposure (breathing zone) and work zone air monitoring of both tracer gas and VOCs were carried out during stripping. Adsorbent tubes, bag samples, and direct-reading instruments (photoionization detector and Fourier transform infrared spectrometry) were used. The distribution studies with nitrous oxide, and the high correlation between the tracer gas and the VOCs (r² ≈ 0.8) during stripping, showed that the tracer gas method may well be used to approximate real exposures in containers. The average breathing zone and work zone concentrations during stripping of naturally ventilated 40-foot containers were 1-7% of the arrival concentrations; however, peaks up to 70% were seen during opening. Even if average exposures during stripping are significantly lower than arrival concentrations, they may still represent serious violations of occupational exposure limits in high-risk containers. The results from this and previous studies illustrate the need to establish practices for the safe handling of ocean freight containers.
Until comprehensive recommendations are in place, personnel that need to enter such containers should, in addition to

  8. Determination of the complex refractive index segments of turbid sample with multispectral spatially modulated structured light and models approximation

    Science.gov (United States)

    Meitav, Omri; Shaul, Oren; Abookasis, David

    2017-09-01

    Spectral data enabling the derivation of a biological tissue sample's complex refractive index (CRI) can provide a range of valuable information in the clinical and research contexts. Specifically, changes in the CRI reflect alterations in tissue morphology and chemical composition, enabling its use as an optical marker during diagnosis and treatment. In the present work, we report a method for estimating the real and imaginary parts of the CRI of a biological sample using Kramers-Kronig (KK) relations in the spatial frequency domain. In this method, phase-shifted sinusoidal patterns at single high spatial frequency are serially projected onto the sample surface at different near-infrared wavelengths while a camera mounted normal to the sample surface acquires the reflected diffuse light. In the offline analysis pipeline, recorded images at each wavelength are converted to spatial phase maps using KK analysis and are then calibrated against phase-models derived from diffusion approximation. The amplitude of the reflected light, together with phase data, is then introduced into Fresnel equations to resolve both real and imaginary segments of the CRI at each wavelength. The technique was validated in tissue-mimicking phantoms with known optical parameters and in mouse models of ischemic injury and heat stress. Experimental data obtained indicate variations in the CRI among brain tissue suffering from injury. CRI fluctuations correlated with alterations in the scattering and absorption coefficients of the injured tissue are demonstrated. This technique for deriving dynamic changes in the CRI of tissue may be further developed as a clinical diagnostic tool and for biomedical research applications. To the best of our knowledge, this is the first report of the estimation of the spectral CRI of a mouse head following injury obtained in the spatial frequency domain.
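The Kramers-Kronig step rests on the fact that the real and imaginary parts of a causal response function are Hilbert-transform pairs, so one part can be recovered from measurements of the other. The numpy-only sketch below is not the authors' pipeline; it demonstrates the principle on a Lorentzian, whose exact Hilbert partner is known in closed form.

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    # The imaginary part of the analytic signal is the Hilbert transform.
    return np.imag(np.fft.ifft(X * h))

# A Lorentzian and its exact Hilbert partner: together they form the real
# and imaginary parts of a function obeying the Kramers-Kronig relations.
t = np.linspace(-200.0, 200.0, 2**15)
re_part = 1.0 / (1.0 + t**2)
im_exact = t / (1.0 + t**2)

# Recover the "phase-like" part from the "amplitude-like" part alone.
im_kk = hilbert_transform(re_part)
```

Edge effects from the finite window dominate the error; away from the boundaries the reconstruction tracks the exact partner closely.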

  9. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
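The abstract does not name the two techniques, but the kind of error involved can be shown with the common first-order (delta-method) propagation on the simplest gate. For a two-input AND gate the top-event probability is a product of independent inputs, whose exact variance is known, so the approximation error can be computed directly; the numbers below are illustrative.

```python
# Top event of a two-input AND gate: P = X * Y with independent inputs.
mu1, s1 = 1e-3, 5e-4   # mean and std of first basic-event probability
mu2, s2 = 2e-3, 1e-3   # mean and std of second basic-event probability

# First-order (delta-method) propagation: Var ~ (df/dx * s1)^2 + (df/dy * s2)^2
var_first_order = (mu2 * s1)**2 + (mu1 * s2)**2

# Exact variance of a product of independent random variables.
var_exact = (s1**2 + mu1**2) * (s2**2 + mu2**2) - (mu1 * mu2)**2

rel_error = (var_exact - var_first_order) / var_exact
```

Here the first-order formula drops the s1²·s2² cross term, so it underestimates the exact variance; with these inputs the relative error is 1/9, and it grows as the coefficients of variation grow.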

  10. Corrosion Testing of 304L SS 3013 Inner Container and Teardrop Samples

    Energy Technology Data Exchange (ETDEWEB)

    Tokash, Justin Charles [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hill, Mary Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lillard, Scott [Univ. of Akron, OH (United States); Joyce, Stephen Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tegtmeier, Eric Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Berg, John M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Veirs, Douglas Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Worl, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-27

    The Department of Energy (DOE) 3013 Standard specifies a minimum of two containers to be used for the storage of plutonium-bearing materials containing at least 30 wt.% plutonium and uranium. Three nested containers are typically used, the outer, inner, and convenience containers, shown in Figure 1. Both the outer and inner containers are sealed with a weld while the innermost convenience container must not be sealed. Lifetime of the containers is expected to be fifty years. The containers are fabricated of austenitic stainless steels (SS) due to their high corrosion resistance. Potential failure mechanisms of the storage containers have been examined by Kolman and Lillard et al.

  11. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge, understanding the relation between the count words and the approximate magnitudes of sets, is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge: before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. On sampling and modeling complex systems

    International Nuclear Information System (INIS)

    Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

    2013-01-01

    The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
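The two entropies the abstract contrasts can be computed directly from a sample: the entropy of the empirical state distribution, and the entropy of the frequency with which different frequencies occur. The sketch below is a minimal reading of that idea (the labels "resolution" and "relevance" and the exact normalization are my assumptions, not taken from the abstract).

```python
import numpy as np
from collections import Counter

def sample_entropies(sample):
    """Entropy of the empirical state distribution ('resolution') and
    entropy of the frequency-of-frequencies ('relevance'), in nats."""
    n = len(sample)
    counts = np.array(list(Counter(sample).values()), dtype=float)
    p = counts / n
    h_states = float(-np.sum(p * np.log(p)))
    # m_k = number of distinct states observed exactly k times
    kf = Counter(counts.astype(int))
    h_freq = -sum((k * m / n) * np.log(k * m / n) for k, m in kf.items())
    return h_states, h_freq

# Four states, each observed twice: the sample is maximally informative
# about which states occur, yet every state shares one frequency, so the
# frequency distribution itself carries no information.
h_states, h_freq = sample_entropies(list("aabbccdd"))
```

In a maximally informative (Zipf-like) sample the frequency entropy is large because many different frequencies coexist; in the degenerate example above it collapses to zero.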

  13. Mobile/Modular BSL-4 Facilities for Meeting Restricted Earth Return Containment Requirements

    Science.gov (United States)

    Calaway, M. J.; McCubbin, F. M.; Allton, J. H.; Zeigler, R. A.; Pace, L. F.

    2017-01-01

    NASA robotic sample return missions designated Category V Restricted Earth Return by the NASA Planetary Protection Office require sample containment and biohazard testing in a receiving laboratory as directed by NASA Procedural Requirement (NPR) 8020.12D, ensuring the preservation and protection of Earth and the sample. Currently, NPR 8020.12D classifies Restricted Earth Return for robotic sample return missions from Mars, Europa, and Enceladus, with the caveat that future proposed mission locations could be added, or restrictions lifted, on a case-by-case basis as scientific knowledge and understanding of biohazards progresses. Since the 1960s, containment of samples posing an unknown extraterrestrial biohazard has been tied to the highest containment standards and protocols known to modern science. Today, Biosafety Level (BSL) 4 standards and protocols are used to study the most dangerous high-risk diseases and unknown biological agents on Earth. Over 30 BSL-4 facilities have been constructed worldwide, with 12 residing in the United States; of these, 8 are operational. In the last two decades, these brick-and-mortar facilities have cost hundreds of millions of dollars, depending on the facility requirements and size. Previous mission concept studies for constructing a NASA sample receiving facility with an integrated BSL-4 quarantine and biohazard testing facility have also been estimated in the hundreds of millions of dollars. As an alternative option, we have recently conducted an initial trade study for constructing a mobile and/or modular sample containment laboratory that would meet all BSL-4 and planetary protection standards and protocols at a fraction of the cost. Mobile and modular BSL-2 and BSL-3 facilities have been successfully constructed and deployed worldwide for government testing of pathogens and pharmaceutical production. Our study showed that a modular BSL-4 construction could result in approximately 90% cost reduction when compared to

  14. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Transformed-domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce the acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in the recovered MR images. To improve the quality of the recovered images, motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem using a gradient descent algorithm. The l1-norm-based regularizer used in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm, known as Adaptive Rood Pattern Search (ARPS), is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) with different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
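The smooth surrogate mentioned in the abstract can be illustrated in isolation. In the hedged sketch below (not the authors' k-t implementation; problem sizes, the step size, and the smoothing parameter are all illustrative), the non-differentiable subgradient sign(x) of the l1 regularizer is replaced by tanh(beta*x), so plain gradient descent applies:

```python
import numpy as np

def cs_recover_smooth_l1(A, y, lam=0.05, beta=50.0, step=0.01, iters=3000):
    """Gradient descent on 0.5*||Ax - y||^2 + lam*||x||_1, with the
    non-differentiable sign(x) replaced by the smooth tanh(beta*x)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * np.tanh(beta * x)
        x -= step * grad
    return x

# Toy problem: 40 random measurements of a 100-dim, 5-sparse vector.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40.0)
x_true = np.zeros(100)
support = rng.choice(100, 5, replace=False)
x_true[support] = 1.5 + rng.random(5)
y = A @ x_true
x_hat = cs_recover_smooth_l1(A, y)
```

The surrogate penalty (whose gradient is lam*tanh(beta*x)) stays convex, so the descent converges, and with enough measurements the sparse support is recovered despite the underdetermined system.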

  15. Preparation of Biological Samples Containing Metoprolol and Bisoprolol for Applying Methods for Quantitative Analysis

    Directory of Open Access Journals (Sweden)

    Corina Mahu Ştefania

    2015-12-01

    Arterial hypertension is a complex disease with many serious complications, representing a leading cause of mortality. Selective beta-blockers such as metoprolol and bisoprolol are frequently used in the management of hypertension. Numerous analytical methods have been developed for the determination of these substances in biological fluids, such as liquid chromatography coupled with mass spectrometry, gas chromatography coupled with mass spectrometry, and high-performance liquid chromatography. Due to the complex composition of biological fluids, sample pre-treatment is required before quantitative determination in order to remove proteins and potential interferences. The methods most commonly used to process biological samples containing metoprolol and bisoprolol were identified through a thorough literature search of the PubMed, ScienceDirect, and Wiley Journals databases. Articles published between 2005 and 2015 were reviewed. Protein precipitation, liquid-liquid extraction, and solid-phase extraction are the main techniques for the extraction of these drugs from plasma, serum, whole blood, and urine samples. In addition, numerous other techniques have been developed for the preparation of biological samples, such as dispersive liquid-liquid microextraction, carrier-mediated liquid-phase microextraction, hollow-fiber-protected liquid-phase microextraction, and on-line molecularly imprinted solid-phase extraction. The analysis of metoprolol and bisoprolol in human plasma, urine, and other biological fluids provides important information in clinical and toxicological trials, thus requiring the application of appropriate extraction techniques for the detection of these antihypertensive substances at nanogram and picogram levels.

  16. Approximating tunneling rates in multi-dimensional field spaces

    Energy Technology Data Exchange (ETDEWEB)

    Masoumi, Ali; Olum, Ken D.; Wachter, Jeremy M., E-mail: ali@cosmos.phy.tufts.edu, E-mail: kdo@cosmos.phy.tufts.edu, E-mail: Jeremy.Wachter@tufts.edu [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States)

    2017-10-01

    Quantum mechanics makes the otherwise stable vacua of a theory metastable through the nucleation of bubbles of the new vacuum. This in turn causes a first-order phase transition. These cosmological phase transitions may have played an important role in settling our universe into its current vacuum, and they may also happen in the future. The most important frameworks where vacuum decay happens contain a large number of fields. Unfortunately, calculating the tunneling rates in these models is very time-consuming. In this paper we present a simple approximation for the tunneling rate by reducing it to a one-field problem which is easy to calculate. We demonstrate the validity of this approximation using our recent code 'Anybubble' for several classes of potentials.

  17. Phthalate metabolites in urine samples from Danish children and correlations with phthalates in dust samples from their homes and daycare centers

    DEFF Research Database (Denmark)

    Langer, S.; Bekö, Gabriel; Weschler, Charles J.

    2013-01-01

    Around the world humans use products that contain phthalates, and human exposure to certain of these phthalates has been associated with various adverse health effects. The aim of the present study has been to determine the concentrations of the metabolites of diethyl phthalate (DEP), di(n-butyl) phthalate (DnBP), di(iso-butyl) phthalate (DiBP), butyl benzyl phthalate (BBzP) and di(2-ethylhexyl) phthalate (DEHP) in urine samples from 441 Danish children (3–6 years old). These children were subjects in the Danish Indoor Environment and Children's Health study. As part of each child's medical examination, a sample from his or her first morning urination was collected. These samples were subsequently analyzed for metabolites of the targeted phthalates. The measured concentrations of each metabolite were approximately log-normally distributed, and the metabolite concentrations significantly

  18. Dissolution Of 3013-DE Sample 10-16

    International Nuclear Information System (INIS)

    Taylor-Pashow, K.

    2011-01-01

    The HB-Line Facility has a long-term mission to dissolve and disposition legacy fissile materials. HB-Line dissolves plutonium dioxide (PuO2) from K-Area in support of the 3013 Destructive Examination (DE) program. The PuO2-bearing solids originate from a variety of unit operations and processing facilities, but all of the material is assumed to be high-fired (i.e., calcined in air for a minimum of two hours at ≥750 °C). The Savannah River National Laboratory (SRNL) conducted dissolution flowsheet studies on 3013 DE Sample 10-16 (can R610826), which contains weapons-grade plutonium (Pu) as the fissile material. The dissolution flowsheet study was performed for 4 hours at 108 °C on unwashed material using 12 M nitric acid (HNO3) containing 0.20 M potassium fluoride (KF). After 4 hours at 108 °C, the 239Pu-equivalent concentration was 32.5 g/L (gamma, 5.0% uncertainty). The insoluble residue comprised 9.88 wt % of the initial bulk weight and contained 5.31-5.95 wt % of the initial Pu. The residue contained Pu in the highest concentration, followed by tungsten (W). Analyses detected 2,770 mg/L chloride (Cl-) in the final dissolver solution (3.28 wt %), which is significantly lower than the amount of Cl- detected by prompt gamma (9.86 wt %) and the 3013 DE Surveillance program (14.7 wt %). A low bias in the chloride measurement is anticipated due to volatilization during the experiment. Gas generation studies found approximately 60 mL of gas per gram of sample produced during the first 30 minutes of dissolution; little to no gas was produced thereafter. Hydrogen gas (H2) was not detected in the sample. Based on detection limits and accounting for dilution, the H2 content of the generated gas was well below the 4.0 vol % flammability limit for H2 in air. Filtration of the dissolver solution occurred readily. 
When aluminum nitrate nonahydrate (ANN) was added to the filtered dissolver solution at a 3:1 Al:F molar ratio, and stored at room

  19. Diversity comparison of Pareto front approximations in many-objective optimization.

    Science.gov (United States)

    Li, Miqing; Yang, Shengxiang; Liu, Xiaohui

    2014-12-01

    Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed to work for any number of objectives of Pareto front approximations in principle, but in practice many of these indicators are infeasible or not workable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates relative quality of different Pareto front approximations rather than provides an absolute measure of distribution for a single approximation. In DCI, all the concerned approximations are put into a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, which is fully independent of the number of divisions used in grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators used widely in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights of the division number in grid and also offers some suggested settings to the users with different preferences.
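The hyperbox bookkeeping behind such an indicator can be sketched compactly. The toy version below is DCI-flavoured but deliberately simplified (a full DCI also ranks the contribution each approximation makes to every nonempty hyperbox; here each front is just scored by the share of occupied boxes it reaches), so the scoring rule is my assumption:

```python
import numpy as np

def grid_cells(points, lo, hi, divisions):
    """Map each objective vector to the index of its grid hyperbox."""
    cells = np.floor((points - lo) / (hi - lo) * divisions).astype(int)
    return {tuple(np.clip(c, 0, divisions - 1)) for c in cells}

def diversity_scores(approximations, divisions=10):
    """Simplified, DCI-flavoured comparison: each Pareto front approximation
    is scored by the share of all occupied hyperboxes it reaches itself, so
    the cost grows with the number of occupied boxes rather than
    exponentially with the number of objectives."""
    allpts = np.vstack(approximations)
    lo, hi = allpts.min(axis=0), allpts.max(axis=0)
    per_approx, occupied = [], set()
    for pts in approximations:
        cells = grid_cells(np.asarray(pts, dtype=float), lo, hi, divisions)
        per_approx.append(cells)
        occupied |= cells
    return [len(c) / len(occupied) for c in per_approx]

# A spans the whole (diagonal) front; B covers only its two ends.
A = np.array([[i, i] for i in range(10)], dtype=float)
B = np.array([[0, 0], [9, 9]], dtype=float)
scores = diversity_scores([A, B])
```

Only occupied hyperboxes are ever stored, which is the key to keeping the computation tractable when the number of objectives is large.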

  20. Quantitative portable gamma-spectroscopy sample analysis for non-standard sample geometries

    International Nuclear Information System (INIS)

    Ebara, S.B.

    1998-01-01

    Utilizing a portable spectroscopy system, a quantitative method was developed for the analysis of samples containing a mixture of fission and activation products in nonstandard geometries. This method was not developed to replace other methods such as Monte Carlo or Discrete Ordinates, but rather to offer an alternative rapid solution. The method can be used with various sample and shielding configurations where analysis on a laboratory-based gamma-spectroscopy system is impractical. The portable gamma-spectroscopy method involves calibration of the detector and modeling of the sample and shielding to identify and quantify the radionuclides present in the sample. The method utilizes the intrinsic efficiency of the detector and the unattenuated gamma fluence rate at the detector surface per unit activity from the sample to calculate the nuclide activity and Minimum Detectable Activity (MDA). For a complex geometry, a computer code written for shielding applications (MICROSHIELD) is utilized to determine the unattenuated gamma fluence rate per unit activity at the detector surface. Lastly, the method is only applicable to nuclides which emit gamma rays and cannot be used for pure beta or alpha emitters. In addition, if sample self-absorption and shielding are significant, the attenuation will result in high MDAs for nuclides which solely emit low-energy gamma rays. The following presents the analysis technique and verification results using actual experimental data, rather than comparisons to other approximations such as Monte Carlo techniques, to demonstrate the accuracy of the method given a known geometry and source term. (author)
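The activity calculation described here reduces, for the simplest case, to dividing the net count rate by the product of intrinsic efficiency, detector face area, and the fluence rate per unit activity. The sketch below assumes a bare point source, where that geometry factor is just yield/(4·pi·d²); the abstract's method would substitute a MICROSHIELD-computed factor for complex shielded geometries, and all numbers here are illustrative:

```python
import numpy as np

def point_source_activity(net_counts, live_time_s, distance_m,
                          det_area_m2, intrinsic_eff, gamma_yield):
    """Activity (Bq) of an unshielded point source from a portable detector,
    using the unattenuated fluence-rate model: fluence rate per Bq at the
    detector face = gamma_yield / (4*pi*d^2)."""
    count_rate = net_counts / live_time_s                         # counts/s
    fluence_per_bq = gamma_yield / (4.0 * np.pi * distance_m**2)  # 1/(m^2*s*Bq)
    return count_rate / (fluence_per_bq * det_area_m2 * intrinsic_eff)

# 100 net counts in 100 s at 0.5 m, 10 cm^2 detector face, 50% intrinsic
# efficiency, 100% gamma yield per decay.
activity = point_source_activity(100, 100.0, 0.5, 1e-3, 0.5, 1.0)
```

With these inputs the expression evaluates to 2000·pi ≈ 6283 Bq, which makes the role of each factor easy to audit by hand.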

  1. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
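The basic loop can be sketched for the simplest setting, a Gaussian multi-armed bandit. This is a hedged toy illustration of the idea (maintain several perturbed models, act greedily with respect to one chosen at random), not the paper's construction for neural networks; the prior, perturbation scheme, and all parameters are my assumptions.

```python
import numpy as np

def ensemble_sampling_bandit(true_means, n_models=10, horizon=2000,
                             sigma=0.5, seed=0):
    """Ensemble sampling for a Gaussian bandit: keep M independently
    perturbed models; each round act greedily w.r.t. one model drawn
    uniformly at random, approximating a Thompson-sampling posterior draw."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    est = rng.normal(1.0, 1.0, size=(n_models, k))  # optimistic prior draws
    counts = np.zeros((n_models, k))
    total = 0.0
    for _ in range(horizon):
        m = rng.integers(n_models)             # "sample a model" step
        arm = int(np.argmax(est[m]))           # act greedily for that model
        reward = true_means[arm] + sigma * rng.normal()
        total += reward
        for j in range(n_models):              # shared data, private noise
            z = reward + sigma * rng.normal()  # perturbation keeps diversity
            counts[j, arm] += 1
            est[j, arm] += (z - est[j, arm]) / (counts[j, arm] + 1)
    return total / horizon

avg_reward = ensemble_sampling_bandit([0.1, 0.5, 0.9])
```

Because every model is updated with its own perturbed copy of each observation, the ensemble stays diverse early (driving exploration) and collapses onto the best arm as data accumulate.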

  2. Role of eikonal approximation in the infrared domain

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, H; Sharma, S K [Saha Inst. of Nuclear Physics, Calcutta (India); Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1977-01-31

    It is shown that the infrared limit of amplitudes for ladder diagrams in spinor electrodynamics is given by the eikonal approximation correctly up to terms of relative order O(|t|^(1/2)/s^(1/2)) only if one also makes the 'small-angle assumption'. Leading corrections to the eikonal amplitude contain s-channel poles which have no Coulomb analogue. For fixed-angle scattering the leading infrared contribution of ladder diagrams is obtained.

  3. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2008

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2008-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by Pacific Northwest National Laboratory for the U.S. Department of Energy. Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 450.1, "Environmental Protection Program," and DOE Order 5400.5, "Radiation Protection of the Public and the Environment." The environmental surveillance sampling design is described in the "Hanford Site Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office." This document contains the calendar year 2008 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis. If a sample will not be collected in 2008, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2008.

  4. Calculation of the MSD two-step process with the sudden approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Shiro [Tohoku Univ., Sendai (Japan). Dept. of Physics; Kawano, Toshihiko [Kyushu Univ., Advanced Energy Engineering Science, Kasuga, Fukuoka (Japan)

    2000-03-01

A calculation of the two-step process with the sudden approximation is described. The Green's function which connects the one-step matrix element to the two-step one is represented in γ-space to avoid the on-energy-shell approximation. Microscopically calculated two-step cross sections are averaged together with an appropriate level density to give the two-step cross section. The calculated cross sections are compared with the experimental data, although the calculation still contains several simplifications at this stage. (author)

  5. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  6. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...
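The Bayesian bootstrap idea underlying the BBC can be illustrated in a few lines: instead of resampling observations, each posterior draw reweights them with Dirichlet(1, …, 1) weights. The sketch below (plain NumPy, synthetic data; a generic Bayesian bootstrap for a mean, not Lo's clone construction) is intended only to make that mechanism concrete.

```python
import numpy as np

# Synthetic i.i.d. data standing in for an observed sample (illustrative only).
rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=100)

def bayesian_bootstrap_mean(data, n_draws=5000, rng=rng):
    """Posterior draws of the mean under the Bayesian bootstrap:
    each draw is a Dirichlet(1, ..., 1)-weighted average of the data."""
    n = len(data)
    w = rng.dirichlet(np.ones(n), size=n_draws)  # one weight vector per draw
    return w @ data

posterior = bayesian_bootstrap_mean(data)
# The posterior mean tracks the sample average, and the posterior spread
# approximates the sampling distribution of the sample mean.
```
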

  7. Cosmological models constructed by van der Waals fluid approximation and volumetric expansion

    Science.gov (United States)

    Samanta, G. C.; Myrzakulov, R.

The universe is modeled with a van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like inflation during an initial epoch of the universe. The model also shows that, as time progresses, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.

  8. ANALYSIS OF TANK 28F SALTCAKE CORE SAMPLES FTF-456 - 467

    Energy Technology Data Exchange (ETDEWEB)

Martino, C; McCabe, D; Edwards, T; Nichols, R

    2007-02-28

Twelve LM-75 core samplers from Tank 28F sampling were received by SRNL for saltcake characterization. Of these, nine samplers contained mixtures of free liquid and saltcake, two contained only liquid, and one was empty. The saltcake contents generally appeared wet. A summary of the major tasks performed in this work is as follows: (1) Individual saltcake segments were extruded from the samplers and separated into saltcake and free liquid portions. (2) Free liquids were analyzed to estimate the amount of traced drill-string fluid contained in the samples. (3) The saltcake from each individual segment was homogenized and then analyzed in duplicate. The analysis used more cost-effective and bounding radiochemical analyses rather than the full Saltstone WAC suite. (4) A composite was created using an approximately equal percentage of each segment's saltcake contents. Supernatant liquid formed upon creation of the composite was decanted prior to use of the composite, but the composite was not drained. (5) A dissolution test was performed by contacting the composite with water at a 4:1 mass ratio of water to salt. The resulting soluble and insoluble fractions were analyzed. Analysis focused on a large subset of the Saltstone WAC constituents.

  9. Finite approximations in fluid mechanics

    International Nuclear Information System (INIS)

    Hirschel, E.H.

    1986-01-01

This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow-field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; zonal solutions for viscous flow problems

  10. Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures

    Directory of Open Access Journals (Sweden)

    Mahdi Shadab Far

    2016-01-01

Structural load types, on the one hand, and structural capacity to withstand these loads, on the other, are of a probabilistic nature, as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches to the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it suffers from certain disadvantages, the biggest being the requirement of a very large number of samples to handle small probabilities, leading to a high computational cost. In this paper, a simple algorithm is proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach is then presented in a step-by-step flowchart for easy programming and implementation.
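As context, a crude Monte Carlo estimate of a failure probability P(R − S < 0) takes only a few lines; the Gaussian capacity R and load S below are illustrative choices, not the paper's example, and the comparison with the exact answer shows why small probabilities demand many samples.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical limit state g = R - S: capacity R ~ N(30, 4), load S ~ N(20, 3).
rng = np.random.default_rng(42)
n = 200_000
R = rng.normal(30.0, 4.0, n)
S = rng.normal(20.0, 3.0, n)
pf_mc = np.mean(R - S < 0.0)  # fraction of sampled states that fail

# For this linear Gaussian case the exact answer is Phi(-beta),
# with reliability index beta = (30 - 20) / sqrt(4^2 + 3^2) = 2.
beta = (30.0 - 20.0) / sqrt(4.0**2 + 3.0**2)
pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))
# The MC standard error is sqrt(pf * (1 - pf) / n), which grows
# relative to pf as pf shrinks -- hence the cost for rare failures.
```
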

  11. An Emulator Toolbox to Approximate Radiative Transfer Models with Statistical Learning

    Directory of Open Access Journals (Sweden)

    Juan Pablo Rivera

    2015-07-01

Physically-based radiative transfer models (RTMs) help in understanding the processes occurring on the Earth's surface and their interactions with vegetation and atmosphere. When it comes to studying vegetation properties, RTMs allow us to study light interception by plant canopies and are used in the retrieval of biophysical variables through model inversion. However, advanced RTMs can take a long computational time, which makes them unfeasible in many real applications. To overcome this problem, it has been proposed to substitute RTMs with so-called emulators. Emulators are statistical models that approximate the functioning of RTMs. Emulators are advantageous in real practice because of their computational efficiency and excellent accuracy and flexibility for extrapolation. We hereby present an “Emulator toolbox” that enables analysing multi-output machine learning regression algorithms (MO-MLRAs) on their ability to approximate an RTM. The toolbox is included in the free-access ARTMO's MATLAB suite for parameter retrieval and model inversion and currently contains both linear and non-linear MO-MLRAs, namely partial least squares regression (PLSR), kernel ridge regression (KRR) and neural networks (NN). These MO-MLRAs have been evaluated on their precision and speed in approximating the soil vegetation atmosphere transfer model SCOPE (Soil Canopy Observation, Photochemistry and Energy balance). SCOPE generates, amongst others, sun-induced chlorophyll fluorescence as the output signal. KRR and NN proved capable of reconstructing fluorescence spectra with great precision. Relative errors fell below 0.5% when trained with 500 or more samples using cross-validation and principal component analysis to alleviate the underdetermination problem. Moreover, NN reconstructed fluorescence spectra about 50 times faster and KRR about 800 times faster than SCOPE. The Emulator toolbox is foreseen to open new opportunities in the use of advanced
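The emulation workflow (run the slow model on a modest set of inputs, then fit a fast regressor to the input/output pairs) can be sketched with a hand-rolled kernel ridge regression. The "expensive model" below is a toy stand-in for an RTM such as SCOPE, and the kernel length-scale and ridge parameter are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an expensive simulator (illustrative, not SCOPE itself).
def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(1)
X_train = np.sort(rng.uniform(-2.0, 2.0, 200))
y_train = expensive_model(X_train)  # one slow run per training point

def rbf(a, b, length=0.3):
    """Gaussian (RBF) kernel matrix between two 1-D point sets."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length**2))

# Kernel ridge regression: solve (K + lam*I) alpha = y once; afterwards
# predictions are cheap kernel evaluations against the training inputs.
lam = 1e-6
alpha = np.linalg.solve(rbf(X_train, X_train) + lam * np.eye(len(X_train)), y_train)

def emulator(x_new):
    return rbf(np.atleast_1d(x_new), X_train) @ alpha
```

Once trained, each emulator call costs one kernel row per training point instead of a full simulator run, which is the source of the speed-ups reported in the abstract.
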

  12. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
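To make the transfer-function idea concrete, here is a minimal single-hidden-layer "wavelet network": fixed Mexican-hat units on a grid of centers, with only the output weights fitted by least squares. This is a simplification for illustration (the target function, centers, and scale are arbitrary assumptions), not the architecture used by the authors.

```python
import numpy as np

def mexican_hat(u):
    # Ricker ("Mexican hat") wavelet, used here as the transfer function
    return (1.0 - u**2) * np.exp(-(u**2) / 2.0)

def fit_wavelet_net(x, y, centers, scale=0.5):
    """Least-squares fit of output weights for fixed wavelet units."""
    Phi = mexican_hat((x[:, None] - centers[None, :]) / scale)  # design matrix
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda xq: mexican_hat(
        (np.atleast_1d(xq)[:, None] - centers[None, :]) / scale
    ) @ w

# Approximate an oscillatory target with 40 wavelet units.
x = np.linspace(-3.0, 3.0, 300)
target = np.sin(2.0 * x) * np.exp(-0.1 * x**2)
net = fit_wavelet_net(x, target, centers=np.linspace(-3.0, 3.0, 40))
```
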

  13. Quasi-planar elemental clusters in pair interactions approximation

    Directory of Open Access Journals (Sweden)

    Chkhartishvili Levan

    2016-01-01

The pair-interactions approximation, when applied to describe elemental clusters, only takes into account bonding between neighboring atoms. According to this approach, isomers of wrapped forms of 2D clusters – nanotubular and fullerene-like structures – and truly 3D clusters are generally expected to be more stable than their quasi-planar counterparts. This is because quasi-planar clusters contain more peripheral atoms with dangling bonds and, correspondingly, fewer atoms with saturated bonds. However, the differences in coordination numbers between central and peripheral atoms lead to the polarization of bonds. The related corrections to the molar binding energy can make small, quasi-planar clusters more stable than their 2D wrapped allotropes and 3D isomers. The present work provides a general theoretical frame for studying the relative stability of small elemental clusters within the pair-interactions approximation.

  14. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    Science.gov (United States)

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  15. Analytical methods of leakage rate estimation from a containment under a LOCA

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

Three of the most prominent maximum flow rate formulas are identified from among the many existing models. Outlines of the three limiting mass flow rate models are given, along with computational procedures to estimate the approximate amount of fission products released from a containment to the environment for a given characteristic hole size for containment-isolation failure, and for given containment pressure and temperature, under a loss-of-coolant accident. Sample calculations are performed using the critical ideal gas flow rate model and Moody's graphs for the maximum two-phase flow rates, and the results are compared with the values obtained from the mass leakage rate formula of the CONTEMPT-LT code for a converging nozzle and sonic flow. It is shown that the critical ideal gas flow rate formula gives results comparable to those obtained from Moody's model. It is also found that a more conservative approach to estimating the leakage rate from a containment under a LOCA is to use the maximum ideal gas flow rate equation rather than the mass leakage rate formula of CONTEMPT-LT. (author)
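For orientation only, the maximum (choked) mass flow of an ideal gas through a hole has a simple closed form; the sketch below evaluates it for an air-like gas escaping through a 1 cm² opening. All numbers are illustrative assumptions and the discharge coefficient is omitted, so this is not a substitute for the models compared in the paper.

```python
from math import sqrt

def choked_mass_flow(P0, T0, A, gamma=1.4, M=0.028):
    """Maximum (choked) ideal-gas mass flow [kg/s] through a hole.

    P0 [Pa], T0 [K]: stagnation pressure/temperature inside containment
    A  [m^2]       : effective hole area
    gamma, M       : specific-heat ratio and molar mass [kg/mol]
    """
    R = 8.314           # universal gas constant, J/(mol K)
    Rs = R / M          # specific gas constant, J/(kg K)
    # critical-flow factor (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return A * P0 * sqrt(gamma / (Rs * T0)) * term

# e.g. a 1 cm^2 hole with 4 bar, 400 K stagnation conditions
mdot = choked_mass_flow(4e5, 400.0, 1e-4)
```

Note that the choked flow rate scales linearly with stagnation pressure and with hole area, which is why the characteristic hole size dominates the release estimate.
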

  16. Double-contained receiver tank 244-TX, grab samples, 244TX-97-1 through 244TX-97-3 analytical results for the final report

    International Nuclear Information System (INIS)

    Esch, R.A.

    1997-01-01

This document is the final report for the double-contained receiver tank (DCRT) 244-TX grab samples. Three grab samples were collected from riser 8 on May 29, 1997. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for the Tank Farms Waste Compatibility Program (DQO). The analytical results are presented in a table

  17. Separation and recovery of Cm from Cm-Pu mixed oxide samples containing Am impurity

    International Nuclear Information System (INIS)

    Hirokazu Hayashi; Hiromichi Hagiya; Mitsuo Akabori; Yasuji Morita; Kazuo Minato

    2013-01-01

Curium was separated and recovered as an oxalate from a Cm-Pu mixed oxide, which had been a 244Cm oxide sample prepared more than 40 years ago; the ratio of 244Cm to 240Pu was estimated to be 0.2:0.8. Radiochemical analyses of the solution prepared by dissolving the Cm-Pu mixed oxide in nitric acid revealed that the oxide contained about 1 at% of 243Am impurity. To obtain a high-purity curium solution, plutonium and americium were removed from the solution by an anion exchange method and by chromatographic separation using tertiary pyridine resin embedded in silica beads with a nitric acid/methanol mixed solution, respectively. Curium oxalate, a precursor compound of curium oxide, was prepared from the purified curium solution. 11.9 mg of Cm oxalate containing some impurities, namely 243Am (5.4 at%) and 240Pu (0.3 at%), was obtained without the Am removal procedure. Meanwhile, 12.0 mg of Cm oxalate (99.8 at% over actinides) was obtained with the procedure including Am removal. Both of the obtained Cm oxalate samples were supplied for the syntheses and measurements of the thermochemical properties of curium compounds. (author)

  18. Application of a non-integer Bessel uniform approximation to inelastic molecular collisions

    International Nuclear Information System (INIS)

    Connor, J.N.L.; Mayne, H.R.

    1979-01-01

    A non-integer Bessel uniform approximation has been used to calculate transition probabilities for collinear atom-oscillator collisions. The collision systems used are a harmonic oscillator interacting via a Lennard-Jones potential and a Morse oscillator interacting via an exponential potential. Both classically allowed and classically forbidden transitions have been treated. The order of the Bessel function is chosen by a physical argument that makes use of information contained in the final-action initial-angle plot. Limitations of this procedure are discussed. It is shown that the non-integer Bessel approximation is accurate for elastic 0 → 0 collisions at high collision energies, where the integer Bessel approximation is inaccurate or inapplicable. (author)

  19. The role of eikonal approximation in the infrared domain

    International Nuclear Information System (INIS)

    Banerjee, H.; Sharma, S.K.; Mallik, S.

    1977-01-01

It is shown that the infrared limit of amplitudes for ladder diagrams in spinor electrodynamics is given by the eikonal approximation correctly up to terms of relative O(t^(1/2)/s^(1/2)) only if one also makes the 'small-angle assumption'. Leading corrections to the eikonal amplitude contain s-channel poles which have no Coulomb analogue. For fixed-angle scattering the leading infrared contribution of ladder diagrams is obtained. (Auth.)

  20. Evaluation of asbestos exposure within the automotive repair industry: a study involving removal of asbestos-containing body sealants and drive clutch replacement.

    Science.gov (United States)

    Blake, Charles L; Dotson, G Scott; Harbison, Raymond D

    2008-12-01

Two independent assessments were performed of airborne asbestos concentrations generated during automotive repair work on vintage vehicles. The first involved removal of asbestos-containing seam sealant, and the second involved servicing of a drive clutch. Despite the relatively high concentrations (5.6-28%) of chrysotile fibers detected within bulk samples of seam sealant, the average asbestos concentration for personal breathing zone (PBZ) samples during seam sealant removal was 0.006 f/cc (fibers per cubic centimeter of air). Many other air samples contained asbestos at or below the analytical limit of detection (LOD). Pneumatic chiseling of the sealant material during removal resulted in 69% of area air samples containing asbestos. Use of this impact tool liberated more asbestos than hand scraping. Asbestos fibers were only detected in air samples collected during the installation of a replacement clutch. The highest corrected airborne asbestos fiber concentration observed during clutch installation was 0.0028 f/cc. This value is approximately 100 times lower than the Occupational Safety and Health Administration's (OSHA) permissible exposure limit (PEL) of 0.1 f/cc. The airborne asbestos concentrations observed during the servicing of vintage vehicles with asbestos-containing seam sealant and clutches are comparable to levels reported for repair work involving brake components and gaskets.

  1. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error-correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  2. Determination of ethanol in acetic acid-containing samples by a biosensor based on immobilized Gluconobacter cells

    Directory of Open Access Journals (Sweden)

    VALENTINA A. KRATASYUK

    2012-11-01

Reshetilov AN, Kitova AE, Arkhipova AV, Kratasyuk VA, Rai MK. 2012. Determination of ethanol in acetic acid containing samples by a biosensor based on immobilized Gluconobacter cells. Nusantara Bioscience 4: 97-100. A biosensor based on Gluconobacter oxydans VKM B-1280 bacteria was used for detection of ethanol in the presence of acetic acid. It was assumed that this assay could be useful for controlling acetic acid production from ethanol and determining the final stage of the fermentation process. Measurements were made using a Clark electrode-based amperometric biosensor. The effect of the pH of the medium on the sensor signal and the analytical parameters of the sensor (detection range, sensitivity) were investigated. The residual content of ethanol in acetic acid samples was analyzed. The results of the study are important for monitoring the acetic acid production process, as they represent a method of tracking its stages.

  3. Serum sample containing endogenous antibodies interfering with multiple hormone immunoassays. Laboratory strategies to detect interference

    Directory of Open Access Journals (Sweden)

    Elena García-González

    2016-04-01

Objectives: Endogenous antibodies (EA) may interfere with immunoassays, causing erroneous results for hormone analyses. As this interference in most cases arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contained EA that interfered with several hormone tests. We also discuss the strategies deployed to detect interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody-blocking tubes. Construction of an in-house ‘nonsense’ sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG) precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report

  4. Behaviour of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    MacGregor, J.G.; Murray, D.W.; Simmonds, S.H.

    1980-05-01

The most significant findings from a study to assess the response of prestressed concrete secondary containment structures for nuclear reactors under the influence of high internal overpressures are presented. A method of analysis is described for determining the strains and deflections, including effects of inelastic behaviour, at various points in the structure resulting from increasing internal pressures. Experimentally derived relationships between the strains and crack spacing, crack width and leakage rate are given. These procedures were applied to the Gentilly-2 containment building to obtain the following results: (1) The first through-the-wall cracks would occur in the dome at 48 psi, or 2.3 times the proof-test pressure. (2) At this pressure, leakage would begin and would increase exponentially as the pressure increases, such that at 93% of the predicted failure load the calculated leakage rate would be approximately equal to the volume of the containment each second. (3) Assuming the pressurizing medium could be supplied sufficiently rapidly, failure would occur due to rupture of the horizontal tendons at approximately 77 psi. (author)

  5. Detection of cracks in shafts with the Approximated Entropy algorithm

    Science.gov (United States)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    The Approximate Entropy is a statistical calculus used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by the Fracture Mechanics. In this case, one analysed the vertical displacements of the rotor during run-up transients. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and crack orientation in the shaft. The results also show that the algorithm can differentiate the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
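A compact, textbook-style version of the Approximate Entropy computation referenced above can be written directly from its definition; the embedding length m plays the role of the paper's parameter p, and the tolerance is f times the signal's standard deviation, matching the paper's parameter f. This is a generic formulation, not the authors' implementation.

```python
import numpy as np

def approx_entropy(signal, m=2, f=0.2):
    """Approximate Entropy of a 1-D signal.

    m : length of comparison vectors (the paper's parameter p)
    f : fraction of the signal's standard deviation used as the
        similarity tolerance r (the paper's parameter f)
    """
    x = np.asarray(signal, dtype=float)
    r = f * x.std()

    def phi(m):
        # all overlapping m-length template vectors
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # for each template, count templates within Chebyshev distance r
        counts = np.array([
            np.sum(np.max(np.abs(templates - t), axis=1) <= r)
            for t in templates
        ])
        return np.mean(np.log(counts / n))

    return phi(m) - phi(m + 1)
```

Regular signals (e.g. a clean sinusoid) yield low values while irregular signals yield high ones, which is what makes the measure usable as a crack indicator once m and f are tuned to the sampling rate, as the abstract notes.
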

  6. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  7. Selection of 3013 Containers for Field Surveillance

    International Nuclear Information System (INIS)

    Larry Peppers; Elizabeth Kelly; James McClard; Gary Friday; Theodore Venetz; Jerry Stakebade

    2007-01-01

    This report revises and combines three earlier reports dealing with the binning, statistical sampling, and sample selection of 3013 containers for field surveillance. It includes changes to the binning specification resulting from completion of the Savannah River Site packaging campaign and new information from the shelf-life program and field surveillance activities. The revised bin assignments result in changes to the random sample specification. These changes are necessary to meet the statistical requirements of the surveillance program. This report will be reviewed regularly and revised as needed. Section 1 of this report summarizes the results of an extensive effort to assign all of the current and projected 3013 containers in the Department of Energy (DOE) inventory to one of three bins (Innocuous, Pressure and Corrosion, or Pressure) based on potential failure mechanisms. Grouping containers into bins provides a framework to make a statistical selection of individual containers from the entire population for destructive and nondestructive field surveillance. The binning process consisted of three main steps. First, the packaged containers were binned using information in the Integrated Surveillance Program database and a decision tree. The second task was to assign those containers that could not be binned using the decision tree to a specific bin using container-by-container engineering review. The final task was to evaluate containers not yet packaged and assign them to bins using process knowledge. The technical basis for the decisions made during the binning process is included in Section 1. A composite decision tree and a summary table show all of the containers projected to be in the DOE inventory at the conclusion of packaging at all sites. Decision trees that provide an overview of the binning process and logic are included for each site. 
Section 2 of this report describes the approach to the statistical selection of containers for surveillance and

  8. Approximations to the non-adiabatic particle response in toroidal geometry

    International Nuclear Information System (INIS)

    Schep, T.J.; Braams, B.J.

    1981-08-01

The non-adiabatic part of the particle response to low-frequency electromagnetic modes with long parallel wavelengths is discussed. Analytic approximations to the kernels of the integrals that relate the amplitudes of the perturbed potentials to the non-adiabatic part of the perturbed density in an axisymmetric toroidal configuration are presented, and the results are compared with numerical calculations. It is shown that both in the plane slab and in toroidal geometry the kernel contains a logarithmic singularity. This singularity is associated with particles with vanishing parallel velocity, so that, in toroidal geometry, it is related to the behaviour of trapped particles near their turning points. In contrast to the plane slab, in toroidal geometry this logarithmic singularity is mainly real and associated with non-resonant particles. Apart from this logarithmic term, the kernel contains a complex regular part arising from resonant as well as non-resonant particles. The analytic approximations presented make the dispersion relation of drift-type modes in toroidal geometry amenable to analytic as well as to simpler numerical calculation of the growth rate and of the spatial mode structure.

  9. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1983-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  10. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1982-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  11. Recoil Reactions in Neutron-Activation Analysis. The Szilard-Chalmers Effect Applied in the Analysis of Biological Samples; II. Transfer of Activities from Container Material to Sample

    Energy Technology Data Exchange (ETDEWEB)

    Brune, D

    1965-01-15

The present investigation consists of two parts. The first part concerns the application of the Szilard-Chalmers effect in the separation of activities from neutron-irradiated biological material. The nuclides As-76, Au-198, Br-82, Ca-47, Cd-115, Cl-38, Co-60, Cr-51, Cs-134, Cu-64, Fe-59, Mg-27, Mo-99, Na-24, P-32, Rb-86, Se-75 and Zn-65 were extracted from either liver tissue, whole blood or muscle tissue. The extractions were made in water, 0.1 N HCl, 1 N HCl or conc. HCl, respectively. The nuclides belonging to the alkali metals, together with Br and Cl, were found to be present in the water and hydrochloric extracts to 96 per cent or more. In the conc. HCl extracts, the greater part of the nuclides were recovered to 90 per cent or more. The enrichment of the different nuclides obtained in the Szilard-Chalmers process was investigated as follows. After extraction of the nuclides from the irradiated material, the solution obtained was divided into two parts, one of which was reactivated. The specific activities of the nuclides in the two solutions were then compared, thus giving the enrichment factor. In one case, the residue of organic material after extraction was reactivated and the activity compared to the initial one. The effect of dilution, together with the application of short irradiation periods favouring a higher yield, was investigated in the separation of Fe-59 from whole-blood samples irradiated in the frozen state. The other part of the investigation concerns an estimation of the amounts of activity originating in the polyethylene and quartz container material that are transferred across the container surface to the sample due to the recoil effect in the thermal neutron-capture process, thus contaminating the sample. The universal range-energy relationship given by Lindhard and Scharff has been applied in these calculations. For containers with impurities in the ppm region, the amounts of activity transferred owing to this effect were found to be quite negligible. 
However, when

  12. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

Full Text Available. The next sample point is chosen as the candidate that maximizes the Gaussian process posterior variance, x_s = argmax_{x*} [K(x*, x*) − K(x*, x) K(x, x)^{-1} K(x, x*)] (Eq. 10). Figure 2 illustrates this sampling strategy more clearly. This selection process can be slow, but could be bootstrapped using Latin hypercube sampling [16]. The empirical results indicate a break-even point: a 240-sample Gaussian process approximation takes roughly the same amount of time to compute as the full blanked overlay. [Figure: boxplot of storyboard ratings for GP approximations at 50-400 samples, the full overlay, and the Itti-Koch method.]
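The quoted selection rule is the standard maximum-posterior-variance criterion for Gaussian processes. A minimal sketch, assuming an RBF kernel with unit prior variance (the paper's kernel, hyperparameters, and image data are not reproduced here):

```python
# Maximum-variance sampling: pick the candidate x* where the GP
# posterior variance K(x*,x*) - K(x*,x) K(x,x)^{-1} K(x,x*) is largest.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def next_sample(x_train, x_candidates, length_scale=1.0, jitter=1e-9):
    """Return the candidate with the largest GP posterior variance."""
    K = rbf_kernel(x_train, x_train, length_scale) + jitter * np.eye(len(x_train))
    K_star = rbf_kernel(x_candidates, x_train, length_scale)
    solve = np.linalg.solve(K, K_star.T)           # K^{-1} K(x, x*)
    var = 1.0 - np.sum(K_star * solve.T, axis=1)   # prior variance is 1
    return x_candidates[np.argmax(var)]

# The candidate farthest from the training data has the highest variance.
print(next_sample(np.array([0.0, 1.0]), np.array([0.1, 0.5, 2.0])))  # → 2.0
```

In practice the candidate set would be seeded by Latin hypercube sampling, as the excerpt suggests, to avoid scoring every pixel.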

  13. Simple transmission Raman measurements using a single multivariate model for analysis of pharmaceutical samples contained in capsules of different colors.

    Science.gov (United States)

    Lee, Yeojin; Kim, Jaejin; Lee, Sanguk; Woo, Young-Ah; Chung, Hoeil

    2012-01-30

Direct transmission Raman measurements for analysis of pharmaceuticals in capsules are advantageous since they can be used to determine active pharmaceutical ingredient (API) concentrations in a non-destructive manner and with much less fluorescence background interference from the capsules themselves compared to conventional back-scattering measurements. If a single calibration model, such as one developed from spectra collected simply in glass vials, could be used to determine API concentrations of samples contained in capsules of different colors, rather than constructing individual models for each capsule color, the utility of transmission measurements would be further enhanced. To evaluate the feasibility, transmission Raman spectra of binary mixtures of ambroxol and lactose were collected in a glass vial and a partial least squares (PLS) model for the determination of ambroxol concentration was developed. The model was then directly applied to determine ambroxol concentrations of samples contained in capsules of four different colors (blue, green, white and yellow). Although the prediction performance was slightly degraded when the samples were placed in blue or green capsules, due to the presence of weak fluorescence, accurate determination of ambroxol was generally achieved in all cases. The prediction accuracy was also investigated when the thickness of the capsule was varied. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can be applied iteratively. This breaks the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We demonstrate this with some numerical examples, and also use the effective generators to sample paths of the slow variables conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
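The central linear-algebra step, finding the null space of the constrained generator rather than running stochastic simulations, reduces to a small eigenvalue problem. A toy sketch with an assumed 3-state generator (not one of the paper's reaction networks):

```python
# The stationary distribution of a constrained fast subsystem is the
# left null vector of its generator Q: pi Q = 0, found here by
# eigendecomposition of Q^T instead of SSA runs.
import numpy as np

# Generator of a 3-state fast process (each row sums to zero).
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

w, v = np.linalg.eig(Q.T)                 # pi Q = 0  <=>  Q^T pi^T = 0
pi = np.real(v[:, np.argmin(np.abs(w))])  # eigenvector for eigenvalue ~0
pi /= pi.sum()                            # normalise to a probability vector
print(pi, pi @ Q)                         # pi @ Q is ~0
```

Averaging the slow dynamics against this null vector gives the effective generator; the paper's iterative variant repeats this for each constrained subsystem.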

  15. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  16. Containment performance evaluation of prestressed concrete containment vessels with fiber reinforcement

    Energy Technology Data Exchange (ETDEWEB)

    Choun, Young Sun; Park, Hyung Kui [Integrated Safety Assessment Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

    Fibers in concrete resist the growth of cracks and enhance the postcracking behavior of structures. The addition of fibers into a conventional reinforced concrete can improve the structural and functional performance of safety-related concrete structures in nuclear power plants. The influence of fibers on the ultimate internal pressure capacity of a prestressed concrete containment vessel (PCCV) was investigated through a comparison of the ultimate pressure capacities between conventional and fiber-reinforced PCCVs. Steel and polyamide fibers were used. The tension behaviors of conventional concrete and fiber-reinforced concrete specimens were investigated through uniaxial tension tests and their tension-stiffening models were obtained. For a PCCV reinforced with 1% volume hooked-end steel fiber, the ultimate pressure capacity increased by approximately 12% in comparison with that for a conventional PCCV. For a PCCV reinforced with 1.5% volume polyamide fiber, an increase of approximately 3% was estimated for the ultimate pressure capacity. The ultimate pressure capacity can be greatly improved by introducing steel and polyamide fibers in a conventional reinforced concrete. Steel fibers are more effective at enhancing the containment performance of a PCCV than polyamide fibers. The fiber reinforcement was shown to be more effective at a high pressure loading and a low prestress level.

  17. Containment performance evaluation of prestressed concrete containment vessels with fiber reinforcement

    International Nuclear Information System (INIS)

    Choun, Young Sun; Park, Hyung Kui

    2015-01-01

    Fibers in concrete resist the growth of cracks and enhance the postcracking behavior of structures. The addition of fibers into a conventional reinforced concrete can improve the structural and functional performance of safety-related concrete structures in nuclear power plants. The influence of fibers on the ultimate internal pressure capacity of a prestressed concrete containment vessel (PCCV) was investigated through a comparison of the ultimate pressure capacities between conventional and fiber-reinforced PCCVs. Steel and polyamide fibers were used. The tension behaviors of conventional concrete and fiber-reinforced concrete specimens were investigated through uniaxial tension tests and their tension-stiffening models were obtained. For a PCCV reinforced with 1% volume hooked-end steel fiber, the ultimate pressure capacity increased by approximately 12% in comparison with that for a conventional PCCV. For a PCCV reinforced with 1.5% volume polyamide fiber, an increase of approximately 3% was estimated for the ultimate pressure capacity. The ultimate pressure capacity can be greatly improved by introducing steel and polyamide fibers in a conventional reinforced concrete. Steel fibers are more effective at enhancing the containment performance of a PCCV than polyamide fibers. The fiber reinforcement was shown to be more effective at a high pressure loading and a low prestress level

  18. Finite element approximation to a model problem of transonic flow

    International Nuclear Information System (INIS)

    Tangmanee, S.

    1986-12-01

A model problem of transonic flow, the Tricomi equation, posed on a domain Ω ⊂ ℝ² bounded by a rectangular-curved boundary, is written in the form of symmetric positive differential equations. The finite element method is then applied. When the triangulation of Ω-bar is made of quadrilaterals and the approximation space consists of Lagrange polynomials, error estimates are obtained. 14 refs, 1 fig

  19. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
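The case the abstract calls exact admits a one-line moment match: a symmetric beta density on [-1, 1] with shape parameter a has mean 0 and variance 1/(2a + 1), so the single free parameter is recovered from the second moment alone. A sketch under that symmetric-beta assumption:

```python
# Moment matching for a symmetric beta density on [-1, 1]:
# var = 1/(2a + 1)  =>  a = (1/var - 1)/2.
def beta_shape_from_variance(var):
    """Invert var = 1/(2a + 1) for the shape parameter a."""
    return (1.0 / var - 1.0) / 2.0

# Round trip: a = 1 is the uniform density on [-1, 1], variance 1/3;
# var -> 1 recovers a -> 0, the two-point (telegraph-like) limit.
print(round(beta_shape_from_variance(1.0 / 3.0), 9))   # → 1.0
```

The paper's Pearson-type machinery generalizes this idea, using higher moments from the extended Fokker-Planck equation when the density is not exactly beta.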

  20. Identification of heparin samples that contain impurities or contaminants by chemometric pattern recognition analysis of proton NMR spectral data

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Qingda [University of Medicine and Dentistry of New Jersey, Department of Pharmacology, Robert Wood Johnson Medical School, Piscataway, NJ (United States); Snowdon, Inc., Monmouth Junction, NJ (United States); University of Medicine and Dentistry of New Jersey, Department of Health Informatics, School of Health Related Professions, Newark, NJ (United States); Keire, David A.; Buhse, Lucinda F.; Trehy, Michael L. [Food and Drug Administration, CDER, Division of Pharmaceutical Analysis, St. Louis, MO (United States); Wood, Richard D. [Snowdon, Inc., Monmouth Junction, NJ (United States); Mital, Dinesh P.; Haque, Syed; Srinivasan, Shankar [University of Medicine and Dentistry of New Jersey, Department of Health Informatics, School of Health Related Professions, Newark, NJ (United States); Moore, Christine M.V.; Nasr, Moheb; Al-Hakim, Ali [Food and Drug Administration, CDER, Office of New Drug Quality Assessment, Silver Spring, MD (United States); Welsh, William J. [University of Medicine and Dentistry of New Jersey, Department of Pharmacology, Robert Wood Johnson Medical School, Piscataway, NJ (United States)

    2011-08-15

    Chemometric analysis of a set of one-dimensional (1D) {sup 1}H nuclear magnetic resonance (NMR) spectral data for heparin sodium active pharmaceutical ingredient (API) samples was employed to distinguish USP-grade heparin samples from those containing oversulfated chondroitin sulfate (OSCS) contaminant and/or unacceptable levels of dermatan sulfate (DS) impurity. Three chemometric pattern recognition approaches were implemented: classification and regression tree (CART), artificial neural network (ANN), and support vector machine (SVM). Heparin sodium samples from various manufacturers were analyzed in 2008 and 2009 by 1D {sup 1}H NMR, strong anion-exchange high-performance liquid chromatography, and percent galactosamine in total hexosamine tests. Based on these data, the samples were divided into three groups: Heparin, DS {<=} 1.0% and OSCS = 0%; DS, DS > 1.0% and OSCS = 0%; and OSCS, OSCS > 0% with any content of DS. Three data sets corresponding to different chemical shift regions (1.95-2.20, 3.10-5.70, and 1.95-5.70 ppm) were evaluated. While all three chemometric approaches were able to effectively model the data in the 1.95-2.20 ppm region, SVM was found to substantially outperform CART and ANN for data in the 3.10-5.70 ppm region in terms of classification success rate. A 100% prediction rate was frequently achieved for discrimination between heparin and OSCS samples. The majority of classification errors between heparin and DS involved cases where the DS content was close to the 1.0% DS borderline between the two classes. When these borderline samples were removed, nearly perfect classification results were attained. Satisfactory results were achieved when the resulting models were challenged by test samples containing blends of heparin APIs spiked with non-, partially, or fully oversulfated chondroitin sulfate A, heparan sulfate, or DS at the 1.0%, 5.0%, and 10.0% (w/w) levels. 
This study demonstrated that the combination of 1D {sup 1}H NMR spectroscopy
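The classification step can be sketched with an SVM on two synthetic "spectral window" features; the class means, spread, and separation below are invented, not the study's NMR intensities:

```python
# SVM sketch separating "heparin" from "contaminated" samples using
# two synthetic intensity features (e.g. bins inside a diagnostic
# chemical-shift window). Data are simulated, not the study's spectra.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Contamination is modelled as an elevated intensity in feature 2.
clean = rng.normal(loc=[1.0, 0.1], scale=0.05, size=(30, 2))
contam = rng.normal(loc=[1.0, 0.4], scale=0.05, size=(30, 2))
X = np.vstack([clean, contam])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X, y)
print(float(clf.score(X, y)))   # well-separated classes classify cleanly
```

As in the study, most practical difficulty comes from borderline samples near the class boundary, not from the classifier itself.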

  1. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  2. Introduction to methods of approximation in physics and astronomy

    CERN Document Server

    van Putten, Maurice H P M

    2017-01-01

    This textbook provides students with a solid introduction to the techniques of approximation commonly used in data analysis across physics and astronomy. The choice of methods included is based on their usefulness and educational value, their applicability to a broad range of problems and their utility in highlighting key mathematical concepts. Modern astronomy reveals an evolving universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data-analysis. The book is organized to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal dete...

  3. Production of vegetation samples containing radionuclides gamma emitters to attend the interlaboratory programs; Producao de amostras de vegetacao contendo radionuclideos emissores gama para participar de programas interlaboratoriais

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Poliana Santos de

    2016-07-01

The production of environmental samples such as soil, sediment, water and vegetation containing radionuclides for intercomparison tests is a very important contribution to environmental monitoring. Laboratories that carry out such monitoring need to demonstrate that their results are reliable. The IRD National Intercomparison Program (PNI) produces and distributes environmental samples containing radionuclides used to check laboratory performance. This work demonstrates the feasibility of producing vegetation (grass) samples containing {sup 60}Co, {sup 65}Zn, {sup 134}Cs, and {sup 137}Cs by the spiked-sample method for the PNI. The preparation and the statistical tests followed the recommendations of ISO Guides 34 and 35. The grass samples were dried, ground and passed through a 250 μm sieve, and 500 g of vegetation was treated in each procedure. Samples were treated by two different procedures: (1) homogenizing the radioactive solution with the vegetation by hand and drying in an oven, and (2) homogenizing the radioactive solution with the vegetation in a rotary evaporator and drying in an oven. The theoretical activity concentrations of the radionuclides in the grass ranged from 593 Bq/kg to 683 Bq/kg. After gamma spectrometry analysis, the results of the two procedures were compared in terms of accuracy, precision, homogeneity and stability. The accuracy, precision and short-term stability of the two methods were similar, but the evaporation method failed the homogeneity test for the radionuclides {sup 60}Co and {sup 134}Cs. Based on these comparisons, the manual agitation procedure was chosen for the PNI grass samples. The accuracy of the procedure, expressed as the uncertainty relative to the theoretical value, ranged between -1.1 and 5.1%, and the precision between 0.6 and 6.5%. These results support the chosen procedure for the production of grass samples for the PNI. (author)

  4. Variational P1 approximations of general-geometry multigroup transport problems

    International Nuclear Information System (INIS)

    Rulko, R.P.; Tomasevic, D.; Larsen, E.W.

    1995-01-01

A variational approximation is developed for general-geometry multigroup transport problems with arbitrary anisotropic scattering. The variational principle is based on a functional that approximates a reaction rate in a subdomain of the system. In principle, approximations that result from this functional ''optimally'' determine such reaction rates. The functional contains an arbitrary parameter α and requires the approximate solutions of a forward and an adjoint transport problem. If the basis functions for the forward and adjoint solutions are chosen to be linear functions of the angular variable Ω, the functional yields the familiar multigroup P1 equations for all values of α. However, the boundary conditions that result from the functional depend on α. In particular, for problems with vacuum boundaries, one obtains the conventional mixed boundary condition, but with an extrapolation distance that depends continuously on α. The choice α = 0 yields a generalization of boundary conditions derived earlier by Federighi and Pomraning for a more limited class of problems. The choice α = 1 yields a generalization of boundary conditions derived previously by Davis for monoenergetic problems. Other boundary conditions are obtained by choosing different values of α. The authors discuss this indeterminacy of α in conjunction with numerical experiments.

  5. Approximal sealings on lesions in neighbouring teeth requiring operative treatment: an in vitro study.

    Science.gov (United States)

    Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud

    2018-02-07

    With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected, approximal WSL lesions were created with 15% HCl gel and standardized photographs were taken. The premolars were mounted in plaster-models in contact with a neighbouring molar with Class II/I-II restoration (Sample 1) or approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster-model and photographed. Adobe Photoshop was used to measure the size of WSL and sealed area. The degree of match between the areas was determined in Photoshop. Interclass agreement for WSL, sealed, and matched areas were found as excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL-area (median = 93%) in Sample 1 and 68-100% of the WSL-area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL-area between groups with and without using magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.

  6. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to 10 cm deep and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy required to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized, requiring approximately 6.4 Wh to penetrate the top 10 cm of ice and approximately 0.02 Wh to collect a 0.1 g sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
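The quoted ~1.2 Wh melt energy can be checked with a back-of-envelope calculation; the density, specific heat, latent heat, and ~100 K surface temperature below are assumed round values, not the report's thermal model:

```python
# Rough energy to warm ~7 cm^3 of ice from a cold Europa-like surface
# temperature to 273 K and melt it. All property values are assumed
# round numbers for illustration.
RHO_ICE = 0.917      # g/cm^3
C_ICE = 2.1          # J/(g K), near-273 K value; lower at 100 K
L_FUSION = 334.0     # J/g, latent heat of fusion
volume_cm3 = 7.0
delta_T = 273.0 - 100.0              # assumed ~100 K surface temperature

mass_g = RHO_ICE * volume_cm3
energy_J = mass_g * (C_ICE * delta_T + L_FUSION)
energy_Wh = energy_J / 3600.0
print(round(energy_Wh, 2))           # → 1.24, consistent with the quoted ~1.2 Wh
```

The agreement is only order-of-magnitude evidence, since the specific heat of ice varies substantially between 100 K and 273 K.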

  7. A Poisson process approximation for generalized K-5 confidence regions

    Science.gov (United States)

    Arsham, H.; Miller, D. R.

    1982-01-01

One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
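For comparison, a one-sided empirical-CDF band is easy to construct from the classical Dvoretzky-Kiefer-Wolfowitz bound, a cruder constant-width alternative to the paper's tail-narrowing generalized K-S regions:

```python
# One-sided DKW band: with probability >= 1 - alpha,
# F(x) >= F_n(x) - eps for all x, where eps = sqrt(ln(1/alpha) / (2n)).
import math

def one_sided_band(sample, alpha=0.05):
    """Lower confidence band for F, evaluated at the order statistics."""
    n = len(sample)
    eps = math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    xs = sorted(sample)
    return [(x, max(0.0, (i + 1) / n - eps)) for i, x in enumerate(xs)]

band = one_sided_band([0.1, 0.4, 0.2, 0.9, 0.6], alpha=0.05)
print(band[-1])   # largest observation and its lower band value
```

Unlike this constant-eps band, the paper's regions tighten in one tail, which is what makes their critical values harder to compute and motivates the Poisson approximation.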

  8. Theory of inelastic electron tunneling from a localized spin in the impulsive approximation.

    Science.gov (United States)

    Persson, Mats

    2009-07-31

    A simple expression for the conductance steps in inelastic electron tunneling from spin excitations in a single magnetic atom adsorbed on a nonmagnetic metal surface is derived. The inelastic coupling between the tunneling electron and the spin is via the exchange coupling and is treated in an impulsive approximation using the Tersoff-Hamann approximation for the tunneling between the tip and the sample.

  9. Novel sample preparation method for surfactant containing suppositories: effect of micelle formation on drug recovery.

    Science.gov (United States)

    Kalmár, Éva; Ueno, Konomi; Forgó, Péter; Szakonyi, Gerda; Dombi, György

    2013-09-01

    Rectal drug delivery is currently at the focus of attention. Surfactants promote drug release from the suppository bases and enhance the formulation properties. The aim of our work was to develop a sample preparation method for HPLC analysis for a suppository base containing 95% hard fat, 2.5% Tween 20 and 2.5% Tween 60. A conventional sample preparation method did not provide successful results as the recovery of the drug failed to fulfil the validation criterion 95-105%. This was caused by the non-ionic surfactants in the suppository base incorporating some of the drug, preventing its release. As guidance for the formulation from an analytical aspect, we suggest a well defined surfactant content based on the turbidimetric determination of the CMC (critical micelle formation concentration) in the applied methanol-water solvent. Our CMC data correlate well with the results of previous studies. As regards the sample preparation procedure, a study was performed of the effects of ionic strength and pH on the drug recovery with the avoidance of degradation of the drug during the procedure. Aminophenazone and paracetamol were used as model drugs. The optimum conditions for drug release from the molten suppository base were found to be 100 mM NaCl, 20-40 mM NaOH and a 30 min ultrasonic treatment of the final sample solution. As these conditions could cause the degradation of the drugs in the solution, this was followed by NMR spectroscopy, and the results indicated that degradation did not take place. The determined CMCs were 0.08 mM for Tween 20, 0.06 mM for Tween 60 and 0.04 mM for a combined Tween 20, Tween 60 system. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Quantitative microwave impedance microscopy with effective medium approximations

    Directory of Open Access Journals (Sweden)

    T. S. Jones

    2017-02-01

Full Text Available Microwave impedance microscopy (MIM) is a scanning probe technique to measure local changes in tip-sample admittance. The imaginary part of the reported change is calibrated with finite element simulations and physical measurements of a standard capacitive sample, and thereafter the output ΔY is given a reference value in siemens. Simulations also provide a means of extracting sample conductivity and permittivity from admittance, a procedure verified by comparing the estimated permittivity of polytetrafluoroethylene (PTFE) to the accepted value. Simulations published by others have investigated the tip-sample system for permittivity at a given conductivity, or conversely for conductivity at a given permittivity; here we supply the full behavior for multiple values of both parameters. Finally, the well-known effective medium approximation of Bruggeman is considered as a means of estimating the volume fractions of the constituents in inhomogeneous two-phase systems. Specifically, we consider the estimation of porosity in carbide-derived carbon, a nanostructured material known for its use in energy storage devices.
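The two-phase Bruggeman inversion mentioned at the end can be sketched directly: given an effective permittivity, solve the self-consistency condition for the pore volume fraction. The phase permittivities and measured value below are illustrative, not the paper's:

```python
# Two-phase Bruggeman EMA: the depolarisation-weighted deviations of
# both phases from the effective medium sum to zero. Given eps_eff,
# solve for the volume fraction f of phase 1 (here, empty pores).
from scipy.optimize import brentq

def bruggeman_residual(f, eps1, eps2, eps_eff):
    t1 = f * (eps1 - eps_eff) / (eps1 + 2.0 * eps_eff)
    t2 = (1.0 - f) * (eps2 - eps_eff) / (eps2 + 2.0 * eps_eff)
    return t1 + t2

eps_pore, eps_carbon = 1.0, 12.0      # assumed phase permittivities
eps_measured = 6.0                    # hypothetical MIM-derived value
porosity = brentq(bruggeman_residual, 0.0, 1.0,
                  args=(eps_pore, eps_carbon, eps_measured))
print(round(porosity, 3))             # → 0.394
```

The residual changes sign between f = 0 and f = 1 whenever eps_measured lies between the two phase permittivities, so the bracketing root-finder is guaranteed a solution there.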

  11. A thermodynamic approximation of the groundstate of antiferromagnetic Heisenberg spin-1/2 lattices

    NARCIS (Netherlands)

    Tielen, G.I.; Iske, P.L.; Caspers, W.J.; Caspers, W.J.

    1991-01-01

The exact ground state of finite Heisenberg spin-1/2 lattices is studied. The coefficients of the so-called Ising configurations contributing to the ground state are approximated by Boltzmann-like expressions. These expressions contain a parameter that may be related to an inverse temperature.

  12. Dissolution Rates of Allophane, FE-Containing Allophane, and Hisingerite and Implications for Gale Crater, Mars

    Science.gov (United States)

    Ralston, S. J.; Hausrath, E. M.; Tschauner, O.; Rampe, E. B.; Christoffersen, R.

    2018-01-01

Investigations with the CheMin X-ray Diffractometer (XRD) onboard the Curiosity rover in Gale Crater demonstrate that all rock and soil samples measured to date contain approximately 15-70 weight percent X-ray amorphous materials. The diffuse scattering hump from the X-ray amorphous materials in CheMin XRD patterns can be fit with a combination of allophane, ferrihydrite, and rhyolitic and basaltic glass. Because of the iron-rich nature of Mars' surface, Fe-rich poorly-crystalline phases, such as hisingerite, may be present in addition to allophane.

  13. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    textabstractThe performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate

  14. Direct containment heating models in the CONTAIN code

    International Nuclear Information System (INIS)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  16. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
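The deterministic-selection-plus-random-permutation mechanism of DS described above can be sketched in a few lines. This is a generic illustration using midpoint quantiles of a standard normal, not the paper's sample-size selection procedure:

```python
import random
from statistics import NormalDist

def descriptive_sample(n, dist=NormalDist(), rng=random):
    """Descriptive sampling: deterministic stratified quantile values,
    then a random permutation (the permutation matters when several
    input variables are combined in a simulation)."""
    values = [dist.inv_cdf((i + 0.5) / n) for i in range(n)]
    rng.shuffle(values)
    return values

# Unlike crude Monte Carlo, a DS estimate of a simple tail probability such as
# P(X > 1.645) is reproducible across runs, since the values are deterministic.
ds = descriptive_sample(1000)
p_fail_ds = sum(v > 1.645 for v in ds) / len(ds)
```

Because the sample values are fixed by the stratification, repeated runs differ only in the permutation, which is exactly the run-to-run stability the record contrasts with CMCS.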

  17. Field data analysis of active chlorine-containing stormwater samples.

    Science.gov (United States)

    Zhang, Qianyi; Gaafar, Mohamed; Yang, Rong-Cai; Ding, Chen; Davies, Evan G R; Bolton, James R; Liu, Yang

    2018-01-15

    Many municipalities in Canada and all over the world use chloramination for drinking water secondary disinfection to avoid DBP formation from conventional chlorination. However, the long-lasting monochloramine (NH2Cl) disinfectant can pose a significant risk to aquatic life through its introduction into municipal storm sewer systems and thus fresh water sources by residential, commercial, and industrial water uses. To establish general total active chlorine (TAC) concentrations in discharges from storm sewers, the TAC concentration was measured in stormwater samples in Edmonton, Alberta, Canada, during the summers of 2015 and 2016 under both dry and wet weather conditions. The field-sampling results showed TAC concentration variations from 0.02 to 0.77 mg/L in summer 2015, which exceeds the discharge effluent limit of 0.02 mg/L. As compared to 2015, the TAC concentrations were significantly lower during the summer 2016 (0-0.24 mg/L), for which it is believed that the higher precipitation during summer 2016 reduced outdoor tap water uses. Since many other cities also use chloramines as disinfectants for drinking water disinfection, the TAC analysis from Edmonton may prove useful for other regions as well. Other physicochemical and biological characteristics of stormwater and storm sewer biofilm samples were also analyzed, and no significant difference was found during these two years. Higher density of AOB and NOB detected in the storm sewer biofilm of residential areas - as compared with other areas - generally correlated to high concentrations of ammonium and nitrite in this region in both of the two years, and they may have contributed to the TAC decay in the storm sewers. The NH2Cl decay laboratory experiments illustrate that dissolved organic carbon (DOC) concentration is the dominant factor in determining the NH2Cl decay rate in stormwater samples. The high DOC concentrations detected from a downstream industrial sampling location may contribute to a

  18. Health risk assessment of drinking arsenic-containing groundwater in Hasilpur, Pakistan: effect of sampling area, depth, and source.

    Science.gov (United States)

    Tabassum, Riaz Ahmad; Shahid, Muhammad; Dumat, Camille; Niazi, Nabeel Khan; Khalid, Sana; Shah, Noor Samad; Imran, Muhammad; Khalid, Samina

    2018-02-10

    Currently, several news channels and research publications have highlighted the dilemma of arsenic (As)-contaminated groundwater in Pakistan. However, there is a lack of data regarding the groundwater As content of various areas in Pakistan. The present study evaluated As contamination and associated health risks in previously unexplored groundwater of Hasilpur, Pakistan. A total of 61 groundwater samples were collected from different areas (rural and urban), sources (electric pump, hand pump, and tubewell) and depths (35-430 ft or 11-131 m). The water samples were analyzed for As level and other parameters such as pH, electrical conductivity, total dissolved solids, cations, and anions. It was found that 41% (25 out of 61) of the water samples contained As (≥ 5 μg/L). Out of 25 As-contaminated water samples, 13 exceeded the permissible level of WHO (10 μg/L). High As contents were found in tubewell samples and at high sampling depths (> 300 ft). The major As-contaminated groundwater in Hasilpur is found in urban areas. Furthermore, health risk and cancer risk due to As contamination were also assessed with respect to average daily dose (ADD), hazard quotient (HQ), and carcinogenic risk (CR). The values of HQ and CR of As in Hasilpur were up to 58 and 0.00231, respectively. Multivariate analysis revealed a positive correlation between groundwater As contents, pH, and depth in Hasilpur. The current study proposes proper monitoring and management of well water in Hasilpur to minimize the As-associated health hazards.
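The ADD, HQ, and CR quantities named in the record follow standard drinking-water risk-assessment formulas. The sketch below uses commonly cited USEPA-style defaults for water intake, body weight, the arsenic oral reference dose, and the cancer slope factor; these are illustrative assumptions, not parameters reported by the Hasilpur study:

```python
def arsenic_risk(c_mg_per_l, intake_l_day=2.0, body_kg=70.0,
                 rfd=0.0003, slope_factor=1.5):
    """Average daily dose (mg/kg/day), hazard quotient, and lifetime
    carcinogenic risk for arsenic in drinking water. The reference dose
    (rfd) and slope_factor are illustrative USEPA-style defaults, not
    values taken from the study above."""
    add = c_mg_per_l * intake_l_day / body_kg   # ADD, mg/kg/day
    hq = add / rfd                              # hazard quotient
    cr = add * slope_factor                     # lifetime cancer risk
    return add, hq, cr

# Example: a hypothetical 50 microgram/L sample, five times the WHO limit.
add, hq, cr = arsenic_risk(0.05)
```

With these assumed defaults, any sample above roughly 10 μg/L yields HQ > 1, consistent with the record's observation of HQ values well above unity.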

  19. Suitability of different containers for the sampling and storage of biogas and biomethane for the determination of the trace-level impurities--A review.

    Science.gov (United States)

    Arrhenius, Karine; Brown, Andrew S; van der Veen, Adriaan M H

    2016-01-01

    The traceable and accurate measurement of biogas impurities is essential in order to robustly assess compliance with the specifications for biomethane being developed by CEN/TC408. An essential part of any procedure aiming to determinate the content of impurities is the sampling and the transfer of the sample to the laboratory. Key issues are the suitability of the sample container and minimising the losses of impurities during the sampling and analysis process. In this paper, we review the state-of-the-art in biogas sampling with the focus on trace impurities. Most of the vessel suitability studies reviewed focused on raw biogas. Many parameters need to be studied when assessing the suitability of vessels for sampling and storage, among them, permeation through the walls, leaks through the valves or physical leaks, sorption losses and adsorption effects to the vessel walls, chemical reactions and the expected initial concentration level. The majority of these studies looked at siloxanes, for which sampling bags, canisters, impingers and sorbents have been reported to be fit-for-purpose in most cases, albeit with some limitations. We conclude that the optimum method requires a combination of different vessels to cover the wide range of impurities commonly found in biogas, which have a wide range of boiling points, polarities, water solubilities, and reactivities. The effects from all the parts of the sampling line must be considered and precautions must be undertaken to minimize these effects. More practical suitability tests, preferably using traceable reference gas mixtures, are needed to understand the influence of the containers and the sampling line on sample properties and to reduce the uncertainty of the measurement. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Isolation and characterization of human apolipoprotein M-containing lipoproteins

    DEFF Research Database (Denmark)

    Christoffersen, Christina; Nielsen, Lars Bo; Axler, Olof

    2006-01-01

    Apolipoprotein M (apoM) is a novel apolipoprotein with unknown function. In this study, we established a method for isolating apoM-containing lipoproteins and studied their composition and the effect of apoM on HDL function. ApoM-containing lipoproteins were isolated from human plasma...... with immunoaffinity chromatography and compared with lipoproteins lacking apoM. The apoM-containing lipoproteins were predominantly of HDL size; approximately 5% of the total HDL population contained apoM. Mass spectrometry showed that the apoM-containing lipoproteins also contained apoJ, apoA-I, apoA-II, apoC-I, apo...

  1. Approximation of rejective sampling inclusion probabilities and application to high order correlations

    NARCIS (Netherlands)

    Boistard, H.; Lopuhää, H.P.; Ruiz-Gazen, A.

    2012-01-01

    This paper is devoted to rejective sampling. We provide an expansion of joint inclusion probabilities of any order in terms of the inclusion probabilities of order one, extending previous results by Hájek (1964) and Hájek (1981) and making the remainder term more precise. Following Hájek (1981), the

  2. Integration of differential equations by the pseudo-linear (PL) approximation

    International Nuclear Information System (INIS)

    Bonalumi, Riccardo A.

    1998-01-01

    A new method of integrating differential equations was originated with the technique of approximately calculating the integrals called the pseudo-linear (PL) procedure: this method is A-stable. This article contains the following examples: 1st order ordinary differential equations (ODEs), 2nd order linear ODEs, stiff system of ODEs (neutron kinetics), one-dimensional parabolic (diffusion) partial differential equations. In this latter case, this PL method coincides with the Crank-Nicholson method

  3. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test of a single quantile is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability that each location of the sequence is a change-point. The location of a change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper is of independent interest and appealing.
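For the mean-shift setting, a generic single change-point scan based on a standardized two-sample difference of means can serve as a point of comparison. This is a textbook-style sketch, not the saddlepoint-based composite quantile test of the record above:

```python
import math
import random

def mean_shift_changepoint(xs):
    """Return the split index k maximizing the standardized difference of
    means between xs[:k] and xs[k:] -- a generic mean-shift scan."""
    n = len(xs)
    total = sum(xs)
    left = 0.0
    best_k, best_stat = 1, -1.0
    for k in range(1, n):
        left += xs[k - 1]
        diff = left / k - (total - left) / (n - k)
        stat = abs(diff) * math.sqrt(k * (n - k) / n)  # standardize by split sizes
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

# Demo: 60 samples around 0, then 40 samples around 1 (noise sd 0.3).
rng = random.Random(3)
xs = [rng.gauss(0.0, 0.3) for _ in range(60)] + [rng.gauss(1.0, 0.3) for _ in range(40)]
k_hat = mean_shift_changepoint(xs)
```

The scan pinpoints a single location rather than an interval, which is the property the composite quantile test generalizes to arbitrary distributions and multiple change-points.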

  4. Reliable Approximation of Long Relaxation Timescales in Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2017-07-01

    Full Text Available Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it with the transfer operator approach to molecular dynamics. By doing so we also allow for understanding deep connections between the different approaches.

  5. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
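One way to obtain constant time and memory per sample, as claimed above, is a slope-corridor ("swing"-style) filter: keep only the segment anchor and a feasible slope interval, and close the segment when the interval becomes empty. The sketch below is a generic bounded-error variant of this idea, not the authors' algorithm:

```python
class SlopeCorridorPLA:
    """Streaming piecewise linear approximation with O(1) work and O(1)
    memory per sample: maintain a feasible slope corridor [lo, hi] from the
    current segment's anchor; close the segment when the corridor is empty."""

    def __init__(self, max_error):
        self.eps = max_error
        self.anchor = None            # (t, y) where the current segment starts
        self.last = None              # most recent sample
        self.lo, self.hi = float("-inf"), float("inf")
        self.segments = []            # finished segments: (t0, y0, t1, y1)

    def push(self, t, y):
        if self.anchor is None:
            self.anchor = self.last = (t, y)
            return
        t0, y0 = self.anchor
        lo = max(self.lo, (y - self.eps - y0) / (t - t0))
        hi = min(self.hi, (y + self.eps - y0) / (t - t0))
        if lo > hi:                   # no single line fits: close the segment
            slope = 0.5 * (self.lo + self.hi)
            t1 = self.last[0]
            self.segments.append((t0, y0, t1, y0 + slope * (t1 - t0)))
            self.anchor = self.last
            t0, y0 = self.anchor
            lo = (y - self.eps - y0) / (t - t0)
            hi = (y + self.eps - y0) / (t - t0)
        self.lo, self.hi = lo, hi
        self.last = (t, y)

# Demo: a line with a kink at t = 4 closes exactly one segment.
pla = SlopeCorridorPLA(0.01)
for t in range(10):
    pla.push(t, float(t) if t < 5 else float(8 - t))
```

Each `push` touches a fixed number of scalars, so both execution time and memory are independent of segment length, the property the paper's algorithm also guarantees.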

  6. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the

  7. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s ... linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev ... Furthermore a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  8. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner.
It turns out that the computational cost of the
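The Monte Carlo baseline that the four analytic techniques are compared against can be illustrated with a generic pick-freeze estimator of a first-order variance-based (Sobol) sensitivity index. The toy additive model below is an assumption for illustration, not the MAPK cascade of the paper:

```python
import random

def first_order_index(f, d, i, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i:
    correlate f on two input samples that share only coordinate i."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    # AB takes coordinate i from A and all other coordinates from B.
    AB = [rb[:i] + [ra[i]] + rb[i + 1:] for ra, rb in zip(A, B)]
    fa = [f(x) for x in A]
    fab = [f(x) for x in AB]
    mu_a = sum(fa) / n
    mu_ab = sum(fab) / n
    var = sum(v * v for v in fa) / n - mu_a * mu_a
    cov = sum(p * q for p, q in zip(fa, fab)) / n - mu_a * mu_ab
    return cov / var

# Toy model: f(x) = 3*x0 + x1 on [0,1]^2, so S_0 = 9/10 and S_1 = 1/10.
s0 = first_order_index(lambda x: 3 * x[0] + x[1], d=2, i=0)
```

Each index estimate costs 2n model evaluations, which is exactly the expense that motivates the analytic approximations (DA, PA, GHI, OHA) studied in the record.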

  9. Tanks 3F and 2F Saltcake Core and Supernate Sample Analysis

    International Nuclear Information System (INIS)

    Martino, Christopher J.

    2004-01-01

    In support of Low-Curie Salt (LCS) process validation at the Savannah River Site (SRS), Liquid Waste Disposition (LWD) has undertaken a program of tank waste characterization, including salt sampling. As part of this initiative, they sampled the surface of the saltcake in Tank 3F and Tank 2F using approximately 12-inch long sample tubes. A series of three saltcake samples were taken of the upper crust in Tank 3F and a single saltcake sample was taken from the bottom of a liquid-filled well in Tank 2F. In addition to analysis of the solid saltcake samples, the liquid contained in the Tank 3F samples and a separate supernate sample from Tank 2F were studied. The primary objective of the characterization is to gather information that will be useful to the selection and processing of the next waste tanks. Most important is the determination of the 137Cs concentration and liquid retention properties of Tank 3F and Tank 2F saltcake to enable projection of drained, dissolved salt composition. Additional information will aid in refining the waste characterization system (WCS) and could assist the eventual salt treatment or processing

  10. Layers of Cold Dipolar Molecules in the Harmonic Approximation

    DEFF Research Database (Denmark)

    R. Armstrong, J.; Zinner, Nikolaj Thomas; V. Fedorov, D.

    2012-01-01

    We consider the N-body problem in a layered geometry containing cold polar molecules with dipole moments that are polarized perpendicular to the layers. A harmonic approximation is used to simplify the hamiltonian and bound state properties of the two-body inter-layer dipolar potential are used...... to adjust this effective interaction. To model the intra-layer repulsion of the polar molecules, we introduce a repulsive inter-molecule potential that can be parametrically varied. Single chains containing one molecule in each layer, as well as multi-chain structures in many layers are discussed...... and their energies and radii determined. We extract the normal modes of the various systems as measures of their volatility and eventually of instability, and compare our findings to the excitations in crystals. We find modes that can be classified as either chains vibrating in phase or as layers vibrating against...

  11. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
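Gillespie's direct method, the exact simulation baseline referred to above, fits in a few lines for a simple birth-death network. This is a standard textbook sketch with illustrative rate constants, not an example from the review:

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, n0=10, t_end=500.0, seed=2):
    """Gillespie's direct method for the network 0 -> X (rate k) and
    X -> 0 (rate gamma * n). Returns the time-averaged copy number,
    which should approach the stationary mean k/gamma."""
    rng = random.Random(seed)
    t, n, area = 0.0, n0, 0.0
    while t < t_end:
        a_birth, a_death = k, gamma * n
        a_total = a_birth + a_death
        dt = rng.expovariate(a_total)          # exponential time to next reaction
        if t + dt > t_end:
            area += n * (t_end - t)
            break
        area += n * dt
        t += dt
        if rng.random() * a_total < a_birth:   # choose reaction proportionally
            n += 1
        else:
            n -= 1
    return area / t_end

mean_n = gillespie_birth_death()
```

Every reaction event costs one exponential draw and one uniform draw, which is why exact simulation becomes expensive for large networks and motivates the approximation methods surveyed in the review.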

  12. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
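The SPSA recursion with symmetric Bernoulli ±1 perturbations, the distribution commonly shown to be optimal under standard conditions, can be sketched as follows. The gain-sequence constants and the quadratic test function are illustrative assumptions, not values from the paper:

```python
import random

def spsa_minimize(f, x0, iters=5000, a=0.5, c=0.1, A=50.0, seed=7):
    """Minimize f with SPSA: two loss evaluations per iteration and one
    simultaneous +/-1 Bernoulli perturbation vector. Gains follow the
    standard a_k = a/(k+1+A)^0.602, c_k = c/(k+1)^0.101 schedules."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for k in range(iters):
        ak = a / (k + 1 + A) ** 0.602
        ck = c / (k + 1) ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        xp = [xi + ck * di for xi, di in zip(x, delta)]
        xm = [xi - ck * di for xi, di in zip(x, delta)]
        g = (f(xp) - f(xm)) / (2.0 * ck)
        x = [xi - ak * g / di for xi, di in zip(x, delta)]  # 1/di = di for +/-1
    return x

target = [1.0, -2.0, 0.5]
loss = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
x_hat = spsa_minimize(loss, [0.0, 0.0, 0.0])
```

Note that the per-iteration cost is two loss evaluations regardless of dimension, which is the key economy of the simultaneous perturbation idea.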

  13. An integrated system for identifying the hidden assassins in traditional medicines containing aristolochic acids

    Science.gov (United States)

    Wu, Lan; Sun, Wei; Wang, Bo; Zhao, Haiyu; Li, Yaoli; Cai, Shaoqing; Xiang, Li; Zhu, Yingjie; Yao, Hui; Song, Jingyuan; Cheng, Yung-Chi; Chen, Shilin

    2015-08-01

    Traditional herbal medicines adulterated and contaminated with plant materials from the Aristolochiaceae family, which contain aristolochic acids (AAs), cause aristolochic acid nephropathy. Approximately 256 traditional Chinese patent medicines, containing Aristolochiaceous materials, are still being sold in Chinese markets today. In order to protect consumers from health risks due to AAs, the hidden assassins, efficient methods to differentiate Aristolochiaceous herbs from their putative substitutes need to be established. In this study, 158 Aristolochiaceous samples representing 46 species and four genera as well as 131 non-Aristolochiaceous samples representing 33 species, 20 genera and 12 families were analyzed using DNA barcodes based on the ITS2 and psbA-trnH sequences. Aristolochiaceous materials and their non-Aristolochiaceous substitutes were successfully identified using BLAST1, the nearest distance method and the neighbor-joining (NJ) tree. In addition, based on sequence information of ITS2, we developed a Real-Time PCR assay which successfully identified herbal material from the Aristolochiaceae family. Using Ultra High Performance Liquid Chromatography-Mass Spectrometer (UHPLC-HR-MS), we demonstrated that most representatives from the Aristolochiaceae family contain toxic AAs. Therefore, integrated DNA barcodes, Real-Time PCR assays using TaqMan probes and UHPLC-HR-MS system provides an efficient and reliable authentication system to protect consumers from health risks due to the hidden assassins (AAs).

  14. UBA domain containing proteins in fission yeast

    DEFF Research Database (Denmark)

    Hartmann-Petersen, Rasmus; Semple, Colin A M; Ponting, Chris P

    2003-01-01

    characterised on both the functional and structural levels. One example of a widespread ubiquitin binding module is the ubiquitin associated (UBA) domain. Here, we discuss the approximately 15 UBA domain containing proteins encoded in the relatively small genome of the fission yeast Schizosaccharomyces pombe...

  15. Sampling and analyses of SRP high-level waste sludges

    International Nuclear Information System (INIS)

    Stone, J.A.; Kelley, J.A.; McMillan, T.S.

    1976-08-01

    Twelve 3-liter samples of high-heat waste sludges were collected from four Savannah River Plant waste tanks with a hydraulically operated sample collector of unique design. Ten of these samples were processed in Savannah River Laboratory shielded cell facilities, yielding 5.3 kg of washed, dried sludge products for waste solidification studies. After initial drying, each batch was washed by settling and decantation to remove the bulk of soluble salts and then was redried. Additional washes were by filtration, followed by final drying. Conclusions from analyses of samples taken during the processing steps were: (a) the raw sludges contained approximately 80 wt percent soluble salts, most of which were removed by the washes; (b) 90Sr and 238,239Pu remained in the sludges, but most of the 137Cs was removed by washing; (c) small amounts of sodium, sulfate, and 137Cs remained in the sludges after thorough washing; (d) no significant differences were found in sludge samples taken from different risers of one waste tank. Chemical and radiometric compositions of the sludge product from each tank were determined. The sludges had diverse compositions, but iron, manganese, aluminum, and uranium were principal elements in each sludge. 90Sr was the predominant radionuclide in each sludge product.

  16. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    OpenAIRE

    Wutthiphong Tara; Chairoj Rattanakawin

    2012-01-01

    The purpose of this research was to preliminarily study Mae Moh lignite grindability tests, emphasizing Hardgrove grindability and approximate work index determination, respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using the A...

  17. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. The optimal 1-mean is the centroid. Approximation of the centroid (Inaba et al.): let S be a random sample of size O(1/ε); then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
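    The sampling guarantee above can be checked empirically. A minimal sketch follows; the Gaussian point set, the choices ε = 0.2 and δ = 0.25, and the sample size m = ⌈1/(εδ)⌉ are illustrative assumptions, not details from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(10_000, 2))          # illustrative point set P
c_opt = P.mean(axis=0)                    # the optimal 1-mean is the centroid

def cost(c):
    # mean squared distance from the points of P to center c
    return ((P - c) ** 2).sum(axis=1).mean()

eps, delta = 0.2, 0.25
m = int(np.ceil(1 / (eps * delta)))       # sample size O(1/(eps * delta))
trials, ok = 200, 0
for _ in range(trials):
    S = P[rng.integers(0, len(P), size=m)]
    if cost(S.mean(axis=0)) <= (1 + eps) * cost(c_opt):
        ok += 1
print(ok / trials)  # success fraction; the bound promises at least 1 - delta
```

In practice the observed success fraction is far above the 1 − δ floor, because the underlying Markov-inequality argument is loose.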

  18. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  19. Recording of interference fringe structure by femtosecond laser pulses in samples of silver-containing porous glass and thick slabs of dichromated gelatin

    Science.gov (United States)

    Andreeva, Olga V.; Dement'ev, Dmitry A.; Chekalin, Sergey V.; Kompanets, V. O.; Matveets, Yu. A.; Serov, Oleg B.; Smolovich, Anatoly M.

    2002-05-01

    The recording geometry and recording media for the method of achromatic wavefront reconstruction are discussed. The femtosecond recording on the thick slabs of dichromated gelatin and the samples of silver-containing porous glass was obtained. The applications of the method to ultrafast laser spectroscopy and to phase conjugation were suggested.

  20. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration.

  1. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
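    The telescoping idea described above can be sketched on a toy model. In the sketch below, the pure-birth process, the Anderson-Higham-style coupling of adjacent levels through shared Poisson variates, and all parameters are illustrative assumptions rather than details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, x0, T = 1.0, 100.0, 1.0   # pure-birth process with rate lam * X

def fine_only(level):
    """Plain tau-leap path with step T / 2**level."""
    h, x = T / 2 ** level, x0
    for _ in range(2 ** level):
        x += rng.poisson(lam * x * h)
    return x

def coupled_pair(level):
    """One (fine, coarse) tau-leap pair sharing Poisson randomness;
    fine step T / 2**level, coarse step twice as large."""
    hf = T / 2 ** level
    xf = xc = x0
    for k in range(2 ** level):
        if k % 2 == 0:                   # start of a coarse step
            ac, births_c = lam * xc, 0.0
        af = lam * xf
        common = rng.poisson(min(af, ac) * hf)   # shared between levels
        extra = rng.poisson(abs(af - ac) * hf)   # residual, to larger rate
        xf += common + (extra if af > ac else 0)
        births_c += common + (extra if ac > af else 0)
        if k % 2 == 1:                   # end of the coarse step
            xc += births_c
    return xf, xc

# Telescoping estimator: E[X_L] = E[X_0] + sum_l E[X_l - X_{l-1}]
L, n = 5, 4000
est = np.mean([fine_only(0) for _ in range(n)])
for level in range(1, L + 1):
    est += np.mean([f - c for f, c in (coupled_pair(level) for _ in range(n))])
print(est)  # near x0 * exp(lam * T) ≈ 271.8, up to the finest-level bias
```

Because the coupled fine and coarse paths are highly correlated, each correction term has small variance and needs far fewer samples than a direct fine-level estimate of comparable accuracy.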

  2. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  3. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  4. Asymptotic Analysis of Upwind Discontinuous Galerkin Approximation of the Radiative Transport Equation in the Diffusive Limit

    KAUST Repository

    Guermond, Jean-Luc; Kanschat, Guido

    2010-01-01

    We revisit some results from M. L. Adams [Nucl. Sci. Engrg., 137 (2001), pp. 298-333]. Using functional analytic tools we prove that a necessary and sufficient condition for the standard upwind discontinuous Galerkin approximation to converge to the correct limit solution in the diffusive regime is that the approximation space contains a linear space of continuous functions, and the restrictions of the functions of this space to each mesh cell contain the linear polynomials. Furthermore, the discrete diffusion limit converges in the Sobolev space H^1 to the continuous one if the boundary data is isotropic. With anisotropic boundary data, a boundary layer occurs, and convergence holds in the broken Sobolev space H^s with s < 1/2 only. © 2010 Society for Industrial and Applied Mathematics.

  5. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership function and Stone-Weierstrass Theorem. He established that fuzzy systems, with product inference, centroid defuzzification and Gaussian functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions
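    A fuzzy system of the kind described above, with Gaussian membership functions, product inference, and centroid defuzzification, reduces to a normalized weighted sum and can be sketched in a few lines. The target function sin(x), the rule count, and the membership width below are illustrative choices, not from the paper:

```python
import numpy as np

# One rule per center; Gaussian memberships, product inference, and
# centroid defuzzification collapse to a normalized weighted sum.
centers = np.linspace(0, np.pi, 25)      # illustrative rule placement
sigma = centers[1] - centers[0]          # width tied to rule spacing
y_rule = np.sin(centers)                 # consequents sampled from the target

def fuzzy(x):
    w = np.exp(-((x[:, None] - centers[None, :]) ** 2) / sigma ** 2)
    return (w * y_rule).sum(axis=1) / w.sum(axis=1)

x = np.linspace(0, np.pi, 1000)
err = np.max(np.abs(fuzzy(x) - np.sin(x)))
print(err)  # uniform error; it shrinks as more rules are added
```

Increasing the number of rules drives the uniform error toward zero, which is the content of the universal approximation results cited in the abstract.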

  6. Ten mathematical essays on approximation in analysis and topology

    CERN Document Server

    López-Gómez, J; Ruiz del Portal, F R

    2005-01-01

    This book collects 10 mathematical essays on approximation in Analysis and Topology by some of the most influential mathematicians of the last third of the 20th Century. Besides containing the very latest results in each of their respective fields, many of the papers also include a series of historical remarks about the state of mathematics at the time the authors found their most celebrated results, as well as some of the personal circumstances originating them, which makes the book particularly attractive for all scientists interested in these fields, from beginners to experts. These gem pieces

  7. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two contexts. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
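    The claim that a Padé approximant built from the same series data beats the corresponding Taylor truncation is easy to illustrate. The choice of e^x and the [2/2] order below are illustrative, not taken from the abstract:

```python
import math

def taylor4(x):
    """4th-order Taylor polynomial of exp at 0."""
    return sum(x**k / math.factorial(k) for k in range(5))

def pade22(x):
    """[2/2] Pade approximant of exp; matches the series through x^4:
    (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)."""
    return (1 + x / 2 + x * x / 12) / (1 - x / 2 + x * x / 12)

x = 1.0
err_taylor = abs(taylor4(x) - math.exp(x))
err_pade = abs(pade22(x) - math.exp(x))
print(err_taylor, err_pade)  # Pade reuses the same series data but is closer
```

Both approximants agree with the exponential series through x^4, yet the rational form tracks the function more closely away from the expansion point, which is the behavior the abstract exploits for the luminosity distance.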

  8. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two contexts. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.

  9. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  10. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  11. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  12. Electrophoretic extraction of low molecular weight cationic analytes from sodium dodecyl sulfate containing sample matrices for their direct electrospray ionization mass spectrometry.

    Science.gov (United States)

    Kinde, Tristan F; Lopez, Thomas D; Dutta, Debashis

    2015-03-03

    While the use of sodium dodecyl sulfate (SDS) in separation buffers allows efficient analysis of complex mixtures, its presence in the sample matrix is known to severely interfere with the mass-spectrometric characterization of analyte molecules. In this article, we report a microfluidic device that addresses this analytical challenge by enabling inline electrospray ionization mass spectrometry (ESI-MS) of low molecular weight cationic samples prepared in SDS containing matrices. The functionality of this device relies on the continuous extraction of analyte molecules into an SDS-free solvent stream based on the free-flow zone electrophoresis (FFZE) technique prior to their ESI-MS analysis. The reported extraction was accomplished in our current work in a glass channel with microelectrodes fabricated along its sidewalls to realize the desired electric field. Our experiments show that a key challenge to successfully operating such a device is to suppress the electroosmotically driven fluid circulations generated in its extraction channel that otherwise tend to vigorously mix the liquid streams flowing through this duct. A new coating medium, N-(2-triethoxysilylpropyl) formamide, recently demonstrated by our laboratory to nearly eliminate electroosmotic flow in glass microchannels was employed to address this issue. Applying this surface modifier, we were able to efficiently extract two different peptides, human angiotensin I and MRFA, individually from an SDS containing matrix using the FFZE method and detect them at concentrations down to 3.7 and 6.3 μg/mL, respectively, in samples containing as much as 10 mM SDS. Notice that in addition to greatly reducing the amount of SDS entering the MS instrument, the reported approach allows rapid solvent exchange for facilitating efficient analyte ionization desired in ESI-MS analysis.

  13. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanovia, represent the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  14. Mass extraction container closure integrity physical testing method development for parenteral container closure systems.

    Science.gov (United States)

    Yoon, Seung-Yil; Sagi, Hemi; Goldhammer, Craig; Li, Lei

    2012-01-01

    Container closure integrity (CCI) is a critical factor to ensure that product sterility is maintained over its entire shelf life. Assuring the CCI during container closure (C/C) system qualification, routine manufacturing and stability is important. FDA guidance also encourages industry to develop a CCI physical testing method in lieu of sterility testing in a stability program. A mass extraction system has been developed to check CCI for a variety of container closure systems such as vials, syringes, and cartridges. Various types of defects (e.g., glass micropipette, laser drill, wire) were created and used to demonstrate a detection limit. Leakage, detected as mass flow in this study, changes as a function of defect length and diameter. Therefore, the morphology of defects has been examined in detail with fluid theories. This study demonstrated that a mass extraction system was able to distinguish between intact samples and samples with 2 μm defects reliably when the defect was exposed to air, water, placebo, or drug product (3 mg/mL concentration) solution. Also, it has been verified that the method was robust, and capable of determining the acceptance limit using 3σ for syringes and 6σ for vials. Sterile products must maintain their sterility over their entire shelf life. Container closure systems such as those found in syringes and vials provide a seal between rubber and glass containers. This seal must be ensured to maintain product sterility. A mass extraction system has been developed to check container closure integrity for a variety of container closure systems such as vials, syringes, and cartridges. In order to demonstrate the method's capability, various types of defects (e.g., glass micropipette, laser drill, wire) were created in syringes and vials and were tested. This study demonstrated that a mass extraction system was able to distinguish between intact samples and samples with 2 μm defects reliably when the defect was exposed to air, water
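    The stated dependence of leakage on defect length and diameter can be illustrated with the classical Hagen-Poiseuille relation. Treating a micro-defect as a smooth cylinder in laminar flow is an assumption made here for illustration only; it is not the fluid model used in the study, and real defects may be irregular or in transitional/molecular flow regimes:

```python
import math

def laminar_leak_rate(d, length, dp=101325.0, mu=1.8e-5):
    """Volumetric leak rate (m^3/s) through an idealized cylindrical
    defect of diameter d and length `length` under pressure drop dp,
    per the Hagen-Poiseuille relation: Q = pi * d^4 * dp / (128 mu L)."""
    return math.pi * d**4 * dp / (128.0 * mu * length)

# Flow scales with the 4th power of diameter and inversely with length,
# so a 2 um defect leaks 16x more than a 1 um defect of equal length.
q1 = laminar_leak_rate(1e-6, 1e-3)
q2 = laminar_leak_rate(2e-6, 1e-3)
print(q2 / q1)  # -> 16.0
```

The steep d^4 scaling is one reason a physical test that resolves flow near the 2 μm defect size can separate intact containers from leaking ones so sharply.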

  15. A surprise in the first Born approximation for electron scattering

    International Nuclear Information System (INIS)

    Treacy, M.M.J.; Van Dyck, D.

    2012-01-01

    A standard textbook derivation for the scattering of electrons by a weak potential under the first Born approximation suggests that the far-field scattered wave should be in phase with the incident wave. However, it is well known that waves scattered from a weak phase object should be phase-shifted by π/2 relative to the incident wave. A disturbing consequence of this missing phase is that, according to the Optical Theorem, the total scattering cross section would be zero in the first Born approximation. We resolve this mystery pedagogically by showing that the first Born approximation fails to conserve electrons even to first order. Modifying the derivation to conserve electrons introduces the correct phase without changing the scattering amplitude. We also show that the far-field expansion for the scattered waves used in many texts is inappropriate for computing an exit wave from a sample, and that the near-field expansion also gives the appropriately phase-shifted result. -- Highlights: ► The first Born approximation is usually invoked as the theoretical physical basis for kinematical electron scattering theory. ► Although it predicts the correct scattering amplitude, it predicts the wrong phase; the scattered wave is missing a prefactor of i. ► We show that this arises because the standard textbook version of the first Born approximation does not conserve electrons. ► We show how this can be fixed.

  16. Non-linear oscillations of fluid in a container

    NARCIS (Netherlands)

    Verhagen, J.H.G.; van Wijngaarden, L.

    1965-01-01

    This paper is concerned with forced oscillations of fluid in a rectangular container. From the linearized approximation of the equations governing these oscillations, resonance frequencies are obtained for which the amplitude of the oscillations becomes infinite. Observation shows that under these

  17. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
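    A minimal sketch of such a procedure, assuming SPSA with ±1 Bernoulli perturbations and Euclidean projection onto a box constraint; the quadratic loss, the noise level, and the gain sequences below are illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(theta):
    """Only noisy loss measurements are available -- no direct gradient."""
    return np.sum((theta - np.array([2.0, -3.0])) ** 2) + rng.normal(0, 0.1)

def project(theta, lo=-1.0, hi=1.0):
    """Projection onto the feasible box [-1, 1]^2 (the constraint set)."""
    return np.clip(theta, lo, hi)

theta = np.zeros(2)
for k in range(1, 2001):
    a = 0.1 / k ** 0.602                      # step-size gain sequence
    c = 0.1 / k ** 0.101                      # perturbation gain sequence
    delta = rng.choice([-1.0, 1.0], size=2)   # simultaneous perturbation
    ghat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) / delta
    theta = project(theta - a * ghat)         # project after each update
print(theta)  # pinned near [1, -1], the Kuhn-Tucker point on the box boundary
```

Because the unconstrained minimizer [2, -3] lies outside the box, the projected iterate settles on the boundary point closest to it, which is exactly the Kuhn-Tucker behavior the abstract describes.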

  18. Advanced Curation: Solving Current and Future Sample Return Problems

    Science.gov (United States)

    Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.

    2015-01-01

    Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current Curation practices, input from Curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the bases of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/sq cm total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon.

  19. Quantitative assessment of submicron scale anisotropy in tissue multifractality by scattering Mueller matrix in the framework of Born approximation

    Science.gov (United States)

    Das, Nandan Kumar; Dey, Rajib; Chakraborty, Semanti; Panigrahi, Prasanta K.; Meglinski, Igor; Ghosh, Nirmalya

    2018-04-01

    A number of tissue-like disordered media exhibit local anisotropy of scattering in their scaling behavior, and scaling behavior contains a wealth of fractal or multifractal properties. We demonstrate that the spatial dielectric fluctuations in a sample of biological tissue exhibit multifractal anisotropy, encoded in the wavelength variation of the light scattering Mueller matrix and manifesting as an intriguing spectral diattenuation effect. We developed an inverse method for the quantitative assessment of the multifractal anisotropy. The method is based on processing the relevant Mueller matrix elements in the Fourier domain using the Born approximation, followed by multifractal analysis. The approach holds promise for probing subtle micro-structural changes in biological tissues associated with cancer and precancer, as well as for non-destructive characterization of a wide range of scattering materials.

  20. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an
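    The core idea, biasing simulation toward the rare event and reweighting by the likelihood ratio, can be illustrated on a toy problem. The Gaussian tail probability and mean-shift change of measure below are illustrative assumptions, not the regenerative reliability model of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
t = 4.0                    # rare event: a standard normal exceeds 4
n = 100_000

# Naive Monte Carlo would need ~30 million samples to see one hit.
# Instead, sample from N(t, 1), which concentrates on the failure
# region, and reweight by the likelihood ratio phi(x) / phi(x - t),
# which simplifies to exp(-t*x + t^2/2).
x = rng.normal(t, 1.0, size=n)
w = np.exp(-t * x + t * t / 2.0)
est = np.mean((x > t) * w)
print(est)  # close to 3.17e-5, the true N(0,1) tail probability above 4
```

Shifting the sampling distribution toward the most likely failure region, as here, is the same mechanism as following the most likely paths to failure: almost every sample is now informative, so the estimator's relative error is small at modest sample sizes.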

  1. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...

  2. Photoelectron spectroscopy and the dipole approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others]

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  3. Dissociation behavior of pellet shaped mixed gas hydrate samples that contain propane as a guest

    International Nuclear Information System (INIS)

    Kawamura, Taro; Sakamoto, Yasuhide; Ohtake, Michika; Yamamoto, Yoshitaka; Komai, Takeshi; Haneda, Hironori; Yoon, Ji-Ho; Ohga, Kotaro

    2006-01-01

    The dissociation kinetics of mixed gas hydrates that contain propane as a guest molecule have been investigated. The mixed gas hydrates used in this work were artificially prepared using the binary gas mixture of methane-propane and the ternary gas mixture of methane-ethane-propane. The crystal structures and the guest compositions of the mixed hydrates were clearly identified by using Raman spectroscopy and gas chromatography. The dissociation rates of the gas hydrates observed under several isothermal and isobaric conditions were discussed with an analytical model. The isobaric conditions were achieved by pressurizing with mixed gases using buffer cylinders, which had similar compositions to those of the initial gases used for synthesizing each hydrate sample. Interestingly, the calculated result agreed well with the experimentally observed results only when the composition of the vapor phase was assumed to be identical with that of the hydrate phase instead of the bulk (equilibrium) gas composition

  4. Corrosion of alloy 22 in phosphate and chloride containing solutions

    International Nuclear Information System (INIS)

    Carranza, Ricardo M.

    2007-01-01

    Alloy C-22 is a Ni-based alloy (22% Cr, 13% Mo, 3% W and 3% Fe in weight percent) that exhibits excellent uniform and localized corrosion resistance due to its protective passive film. It was designed to resist the most aggressive environments for industrial applications. Alloy 22 is one of the candidates to be considered for the outer shell of the canister that would contain high level radioactive nuclear wastes. The effect of phosphate ion in chloride containing solutions at 90 °C was studied under aggressive conditions where this material might be susceptible to crevice corrosion. The electrolyte solution, which consisted of 1M NaCl and different phosphate concentrations (between 10⁻³ M and 1M), was deoxygenated by bubbling with nitrogen. Electrochemical tests, electron microscope observations (SEM) and energy dispersive spectrometer analysis (EDS) were conducted. Crevice corrosion was not detected, and comparison of the potentiodynamic polarization tests showed an increase of the passivity range in phosphate containing solutions. The passive current was approximately 1 μA/cm² in all the tests performed in this work. The differences in composition of the anodic film formed on the samples suggest that phosphate is responsible for the increase of the passivity range by incorporation into the passive film. (author)

  5. Method of approximating the effects of blast mitigation materials on particulate-containing clouds formed by explosions

    International Nuclear Information System (INIS)

    Dyckes, G.W.

    1983-09-01

    A numerical model was developed for predicting the effect of blast mitigation materials on the rise and entrainment rate of explosively driven buoyant clouds containing radiotoxic particles. Model predictions for clouds from unmitigated explosions agree with published observations. More experimental data are needed to assess the validity of predictions for clouds from mitigated explosions.

  6. Calculating properties with the coherent-potential approximation

    International Nuclear Information System (INIS)

    Faulkner, J.S.; Stocks, G.M.

    1980-01-01

    It is demonstrated that the expression that has hitherto been used for calculating the Bloch spectral-density function A^B(E,k) in the Korringa-Kohn-Rostoker coherent-potential-approximation theory of alloys leads to manifestly unphysical results. No manipulation of the expression can eliminate this behavior. We develop an averaged Green's-function formulation and from it derive a new expression for A^B(E,k) which does not contain unphysical features. The earlier expression for A^B(E,k) was suggested as plausible on the basis that it is a spectral decomposition of the Lloyd formula. Expressions for many other properties of alloys have been obtained by manipulations of the Lloyd formula, and it is now clear that all such expressions must be considered suspect. It is shown by numerical and algebraic comparisons that some of the expressions obtained in this way are equivalent to the ones obtained from a Green's function, while others are not. In addition to studying these questions, the averaged Green's-function formulation developed in this paper is shown to furnish an interesting new way to approach many problems in alloy theory. The method is described in such a way that the aspects of the formulation that arise from the single-site approximation can be distinguished from those that depend on a specific choice for the effective scatterer.

  7. Characterization of Hanford tank wastes containing ferrocyanides

    International Nuclear Information System (INIS)

    Tingey, J.M.; Matheson, J.D.; McKinley, S.G.; Jones, T.E.; Pool, K.H.

    1993-02-01

    Currently, 17 storage tanks on the Hanford site that are believed to contain > 1,000 gram moles (465 lbs) of ferrocyanide compounds have been identified. Seven other tanks are classified as ferrocyanide-containing waste tanks but contain less than 1,000 gram moles of ferrocyanide compounds. These seven tanks are still included as Hanford Watch List Tanks. These tanks have been declared an unreviewed safety question (USQ) because of potential thermal reactivity hazards associated with the ferrocyanide compounds and nitrate and nitrite. Hanford tanks with waste containing > 1,000 gram moles of ferrocyanide have been sampled. Extensive chemical, radiochemical, and physical characterization has been performed on these waste samples. The reactivity of these wastes was also studied using differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). Actual tank waste samples were retrieved from tank 241-C-112 using a specially designed and equipped core-sampling truck. Only a small portion of the data obtained from this characterization effort is reported in this paper, which deals primarily with the cyanide and carbon analyses, thermal analyses, and limited physical property measurements.

  8. Whole-genome gene expression profiling of formalin-fixed, paraffin-embedded tissue samples.

    Directory of Open Access Journals (Sweden)

    Craig April

    2009-12-01

    We have developed a gene expression assay (Whole-Genome DASL) capable of generating whole-genome gene expression profiles from degraded samples such as formalin-fixed, paraffin-embedded (FFPE) specimens. We demonstrated a similar level of sensitivity in gene detection between matched fresh-frozen (FF) and FFPE samples, with the number and overlap of probes detected in the FFPE samples being approximately 88% and 95% of those in the corresponding FF samples, respectively; 74% of the differentially expressed probes overlapped between the FF and FFPE pairs. The WG-DASL assay is also able to detect 1.3- to 1.5-fold and 1.5- to 2-fold changes in intact and FFPE samples, respectively. The dynamic range for the assay is approximately 3 logs. Comparing the WG-DASL assay with an in vitro transcription-based labeling method yielded fold-change correlations of R² ≈ 0.83, while fold-change comparisons with quantitative RT-PCR assays yielded R² ≈ 0.86 and R² ≈ 0.55 for intact and FFPE samples, respectively. Additionally, the WG-DASL assay yielded high self-correlations (R² > 0.98) with low intact RNA inputs ranging from 1 ng to 100 ng; reproducible expression profiles were also obtained with 250 pg total RNA (R² ≈ 0.92), with approximately 71% of the probes detected in 100 ng total RNA also detected at the 250 pg level. When FFPE samples were assayed, 1 ng total RNA yielded self-correlations of R² ≈ 0.80, while still maintaining a correlation of R² ≈ 0.75 with standard FFPE inputs (200 ng). Taken together, these results show that the WG-DASL assay provides a reliable platform for genome-wide expression profiling in archived materials. It also possesses utility within clinical settings where only limited quantities of samples may be available (e.g., microdissected material) or when minimally invasive procedures are performed (e.g., biopsied specimens).

  9. The NOSAMS sample preparation laboratory in the next millennium: Progress after the WOCE program

    International Nuclear Information System (INIS)

    Gagnon, Alan R.; McNichol, Ann P.; Donoghue, Joanne C.; Stuart, Dana R.; Reden, Karl von

    2000-01-01

    Since 1991, the primary charge of the National Ocean Sciences AMS (NOSAMS) facility at the Woods Hole Oceanographic Institution has been to supply high throughput, high precision AMS ¹⁴C analyses for seawater samples collected as part of the World Ocean Circulation Experiment (WOCE). Approximately 13,000 samples taken as part of WOCE should be fully analyzed by the end of Y2K. Additional sample sources and techniques must be identified and incorporated if NOSAMS is to continue in its present operation mode. A trend in AMS today is the ability to routinely process and analyze radiocarbon samples that contain tiny amounts (<100 μg) of carbon. The capability to mass-produce small samples for ¹⁴C analysis has been recognized as a major facility goal. The installation of a new 134-position MC-SNICS ion source, which utilizes a smaller graphite target cartridge than presently used, is one step towards realizing this goal. New preparation systems constructed in the sample preparation laboratory (SPL) include an automated bank of 10 small-volume graphite reactors, an automated system to process organic carbon samples, and a multi-dimensional preparative capillary gas chromatograph (PCGC).

  10. Applied pressure-dependent anisotropic grain connectivity in shock consolidated MgB₂ samples

    Energy Technology Data Exchange (ETDEWEB)

    Ohashi, Wataru [Graduate School of Engineering, University of Yamanashi, Takeda 4-3-11, Kofu 400-8511 (Japan); Takenaka, Kenta [Graduate School of Engineering, University of Yamanashi, Takeda 4-3-11, Kofu 400-8511 (Japan); Kondo, Tadashi [Graduate School of Engineering, University of Yamanashi, Takeda 4-3-11, Kofu 400-8511 (Japan); Tamaki, Hideyuki [Graduate School of Engineering, University of Yamanashi, Takeda 4-3-11, Kofu 400-8511 (Japan); Matsuzawa, Hidenori [Graduate School of Engineering, University of Yamanashi, Takeda 4-3-11, Kofu 400-8511 (Japan)]. E-mail: matuzawa@mx3.nns.ne.jp; Kai, Shoichiro [Advanced Materials and Process Development Group, Explosive Division, Asahi Kasei Chemicals Corporation, Oita 870-0392 (Japan); Kakimoto, Etsuji [Advanced Materials and Process Development Group, Explosive Division, Asahi Kasei Chemicals Corporation, Oita 870-0392 (Japan); Takano, Yoshihiko [National Institute for Materials Science, Tsukuba 305-0047 (Japan); Minehara, Eisuke [FEL Laboratory, Tokai Site, Japan Atomic Energy Research Institute, Shirakata-shirane 2-4, Tokai, Ibaraki 319-1195 (Japan)

    2006-09-15

    Three different cylindrical MgB₂ bulk samples were prepared by the underwater shock consolidation method in which shock waves of several GPa, generated by detonation of explosives, were applied to a metallic cylinder containing commercially available MgB₂ powders with no additives. Resistivity anisotropy of the samples increased with shock pressure. The highest- and medium-pressure applied samples had finite resistivities in the radial direction for the whole temperature range down to 12 K, whereas their axial and azimuthal resistivities dropped to zero at 32-35 K. By contrast, the lowest-pressure applied sample was approximately isotropic with a normal-state resistivity of ~40 μΩ cm, an onset temperature of ~38.5 K, and a transition width of ~4.5 K. These extremely anisotropic properties would have resulted from the distortion of grain boundaries and grain cores, caused by the shock pressures and their repeated bouncing.

  11. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  12. Grain Growth in Samples of Aluminum Containing Alumina Particles

    DEFF Research Database (Denmark)

    Tweed, C. J.; Hansen, Niels; Ralph, B.

    1983-01-01

    A study of the two-dimensional and three-dimensional grain size distributions before and after grain growth treatments has been made in samples having a range of oxide contents. In order to collect statistically useful amounts of data, an automatic image analyzer was used and the resulting data w...

  13. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Exact fluctuations of nonequilibrium steady states from approximate auxiliary dynamics

    OpenAIRE

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2017-01-01

    We describe a framework to significantly reduce the computational effort to evaluate large deviation functions of time integrated observables within nonequilibrium steady states. We do this by incorporating an auxiliary dynamics into trajectory based Monte Carlo calculations, through a transformation of the system's propagator using an approximate guiding function. This procedure importance samples the trajectories that most contribute to the large deviation function, mitigating the exponenti...

  15. Critique of the Brownian approximation to the generalized Langevin equation in lattice dynamics

    International Nuclear Information System (INIS)

    Diestler, D.J.; Riley, M.E.

    1985-01-01

    We consider the classical motion of a harmonic lattice in which only those atoms in a certain subset of the lattice (primary zone) may interact with an external force. The formally exact generalized Langevin equation (GLE) for the primary zone is an appropriate description of the dynamics. We examine a previously proposed Brownian, or frictional damping, approximation that reduces the GLE to a set of coupled ordinary Langevin equations for the primary atoms. It is shown that the solution of these equations can contain undamped motion if there is more than one atom in the primary zone. Such motion is explicitly demonstrated for a model that has been used to describe energy transfer in atom--surface collisions. The inability of the standard Brownian approximation to yield an acceptable, physically meaningful result for primary zones comprising more than one atom suggests that the Brownian approximation may introduce other spurious dynamical effects. Further work on damping of correlated motion in lattices is needed

  16. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
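
    The voting scheme described above can be sketched in a few lines. This is an illustrative toy (a single-bit function with three hand-made approximate replicas), not the patented circuit design; the function names and error patterns are invented for the example.

```python
# Illustrative toy, not the patented design: three approximate replicas
# of a single-bit reference function (the sum bit of a full adder),
# each wrong on a *different* input pattern, feed a majority voter.

def reference(a, b, c):
    """Exact reference circuit: full-adder sum bit."""
    return a ^ b ^ c

def approx1(a, b, c):
    return (a ^ b ^ c) ^ (a & b & c)            # errs only on (1,1,1)

def approx2(a, b, c):
    return (a ^ b ^ c) ^ (~a & ~b & ~c & 1)     # errs only on (0,0,0)

def approx3(a, b, c):
    return (a ^ b ^ c) ^ (a & ~b & ~c & 1)      # errs only on (1,0,0)

def voter(bits):
    """Output the majority value of the replica outputs."""
    return 1 if sum(bits) >= 2 else 0

# Because no input pattern fools two replicas at once, the voted
# output equals the reference on every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            outs = [f(a, b, c) for f in (approx1, approx2, approx3)]
            assert voter(outs) == reference(a, b, c)
print("majority of approximate replicas matches the reference on all 8 inputs")
```

    The design point the abstract makes is visible here: each replica may be individually cheaper or wrong somewhere, as long as no single input defeats a majority of them.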

  17. Derivation of fluid dynamics from kinetic theory with the 14-moment approximation

    International Nuclear Information System (INIS)

    Denicol, G.S.; Molnar, E.; Niemi, H.; Rischke, D.H.

    2012-01-01

    We review the traditional derivation of the fluid-dynamical equations from kinetic theory according to Israel and Stewart. We show that their procedure to close the fluid-dynamical equations of motion is not unique. Their approach contains two approximations, the first being the so-called 14-moment approximation to truncate the single-particle distribution function. The second consists in the choice of equations of motion for the dissipative currents. Israel and Stewart used the second moment of the Boltzmann equation, but this is not the only possible choice. In fact, there are infinitely many moments of the Boltzmann equation which can serve as equations of motion for the dissipative currents. All resulting equations of motion have the same form, but the transport coefficients are different in each case. (orig.)

  18. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  19. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Pade approximation, is suggested. It is shown that the specific behavior of the Pade approximant near poles is an extremely favourable analytical property, essentially extending the convergence range and increasing the convergence rate compared with polynomial approximation. The Pade approximation is a particularly natural instrument for resonance-curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction in the stored numerical information compared with point-by-point representation at the same accuracy.
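
    As a concrete illustration of the technique (a generic sketch, not the BOSPOR code): the coefficients of an [m/n] Pade approximant follow from the Taylor coefficients of the data by solving a small linear system. The minimal implementation below uses exact rational arithmetic and is checked against the well-known [2/2] approximant of exp(x).

```python
from fractions import Fraction
from math import factorial

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns (p, q) with q[0] == 1 such that P(x)/Q(x) matches the
    power series through order m + n."""
    def C(k):
        return c[k] if k >= 0 else Fraction(0)
    # Linear system for q[1..n]: sum_j q[j]*C(k-j) = 0 for k = m+1..m+n
    A = [[C(m + i - j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [-C(m + i) for i in range(1, n + 1)]
    # Gaussian elimination with exact Fractions
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    q = [Fraction(0)] * (n + 1)
    q[0] = Fraction(1)
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][cc] * q[cc + 1] for cc in range(r + 1, n))
        q[r + 1] = s / A[r][r]
    # Numerator coefficients follow directly from q and the series
    p = [sum(q[j] * C(k - j) for j in range(n + 1)) for k in range(m + 1)]
    return p, q

# Taylor coefficients of exp(x): 1/k!
c = [Fraction(1, factorial(k)) for k in range(5)]
p, q = pade(c, 2, 2)
# Known result: [2/2] Pade of exp(x) is (1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)
print(p, q)
```

    The poles of Q capture resonance-like behavior that a polynomial of the same total degree cannot, which is the property the abstract exploits for resonance curves.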

  20. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  1. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels, and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to ²⁰Ne.

  2. Containment performance working group report. Draft report for comment

    International Nuclear Information System (INIS)

    1985-05-01

    Containment buildings for power reactors have been studied to estimate their leak rate as a function of increasing internal pressure and temperature associated with severe accident sequences involving significant core damage. Potential leak paths through containment penetration assemblies (such as equipment hatches, airlocks, purge and vent valves, and electrical penetrations) have been identified, and their contributions to leak area for the containment are incorporated into containment response analyses of selected severe accident sequences to predict the containment leak rate and pressure/temperature response as a function of time. Because of a lack of reliable experimental data on the leakage behavior of containment penetrations and isolation barriers at pressures beyond their design conditions, an analytical approach has been used to estimate the leakage behavior of components found in specific reference plants that approximately characterize the various containment types.

  3. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which provide faster convergence of the solution. Effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  4. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  5. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding-window cases of only the W most recent items.
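
    The single-site building block for such protocols is classical reservoir sampling (Algorithm R), which keeps a uniform sample of size k in O(k) space over one stream. The distributed, communication-efficient coordination across sites is the article's contribution and is not reproduced here; this is only the local sketch.

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Algorithm R: one pass, O(k) space, uniform sample of size k
    without replacement from a stream of unknown length."""
    rng = rng or random.Random(0)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # item i kept with prob k/(i+1)
            if j < k:
                sample[j] = item         # evict a uniformly chosen slot
    return sample

print(reservoir_sample(range(1000), 5))
```

    Every item ends up in the final sample with probability k/n regardless of its position; the hard part the article addresses is preserving that uniformity over the union of many streams without shipping every item to a coordinator.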

  6. Preparation of microspheres containing methyl methacrylate (MMA) with magnetic nanoparticles; Preparacao de microesferas contendo metacrilato de metila (PMMA) com nanoparticulas magneticas

    Energy Technology Data Exchange (ETDEWEB)

    Feuser, P.E.; Souza, M.N. de, E-mail: paulofeuser@hotmail.co, E-mail: nele@eq.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Dept. de Engenharia Quimica

    2010-07-01

    Magnetic nanoparticles have found many technological applications and have been intensively studied due to their special magnetic properties. In most biomedical applications, microspheres containing magnetic nanoparticles are used as vehicles for transporting drugs, presenting several advantages compared to other conventional methods. PMMA is a biocompatible polymer that can be used for the encapsulation of magnetic nanoparticles while showing a large saturation magnetization. PMMA microparticles containing magnetic nanoparticles were prepared by suspension polymerization. The polymers containing magnetic nanoparticles were characterized by X-ray diffraction (XRD), vibrating sample magnetometry, thermogravimetric analysis, optical microscopy, gel permeation chromatography, and particle size analysis with a Mastersizer 2000 (Malvern Instruments). The average size of the microspheres was approximately 150 μm, and depending on the amount of magnetic nanoparticles in the reaction medium the Mw of the microspheres can be altered. (author)

  7. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. In particular, being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be the subject of future investigations on decision making under risk.

  8. Role of americium interference in analysis of samples containing rare earths

    International Nuclear Information System (INIS)

    Mohapatra, P.K.; Adya, V.C.; Thulasidas, S.K.; Bhattacharyya, A.; Kumar, Mithlesh; Godbole, S.V.; Manchanda, V.K.

    2007-01-01

    Quality control of nuclear fuel samples requires precise estimation of rare earths, which have high neutron absorption cross sections and act as neutron poisons. Am is generated by nuclear decay, whereas lanthanides may be present as impurities picked up during reprocessing/fuel fabrication. Precise estimation of the rare earths by the ICP-AES method in the presence of ²⁴¹Am is a challenging task due to the likelihood of spectral interference from the latter. Rare earth impurities in the purified Am sample were estimated by the ICP-AES method. Known amounts of the rare earths, viz. Sm, Eu, Dy, and Gd, were used as a synthetic sample, and the interference due to Am was investigated. (author)

  9. ABCtoolbox: a versatile toolkit for approximate Bayesian computations

    Directory of Open Access Journals (Sweden)

    Neuenschwander Samuel

    2010-03-01

    Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms allowing one to obtain parameter posterior distributions based on simulations not requiring likelihood computations. Results: Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computations (ABC). It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can also interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, data simulations, computation of summary statistics, estimation of posterior distributions, model choice, validation of the estimation procedure, and visualization of the results.

  10. Fluid sampling tool

    Science.gov (United States)

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    1999-05-25

    A fluid sampling tool for sampling fluid from a container. The tool has a fluid collecting portion which is drilled into the container wall, thereby affixing it to the wall. The tool may have a fluid extracting section which withdraws fluid collected by the fluid collecting section. The fluid collecting section has a fluted shank with an end configured to drill a hole into a container wall. The shank has a threaded portion for tapping the borehole. The shank is threadably engaged to a cylindrical housing having an inner axial passageway sealed at one end by a septum. A flexible member having a cylindrical portion and a bulbous portion is provided. The housing can be slid into an inner axial passageway in the cylindrical portion and sealed to the flexible member. The bulbous portion has an outer lip defining an opening. The housing is clamped into the chuck of a drill, the lip of the bulbous section is pressed against a container wall until the shank touches the wall, and the user operates the drill. Wall shavings (kerf) are confined in a chamber formed in the bulbous section as it folds when the shank advances inside the container. After sufficient advancement of the shank, an o-ring makes a seal with the container wall.

  11. Approximating a DSM-5 Diagnosis of PTSD Using DSM-IV Criteria

    Science.gov (United States)

    Rosellini, Anthony J.; Stein, Murray B.; Colpe, Lisa J.; Heeringa, Steven G.; Petukhova, Maria V.; Sampson, Nancy A.; Schoenbaum, Michael; Ursano, Robert J.; Kessler, Ronald C.

    2015-01-01

    Background: Diagnostic criteria for DSM-5 posttraumatic stress disorder (PTSD) are in many ways similar to DSM-IV criteria, raising the possibility that it might be possible to closely approximate DSM-5 diagnoses using DSM-IV symptoms. If so, the resulting transformation rules could be used to pool research data based on the two criteria sets. Methods: The Pre-Post Deployment Study (PPDS) of the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) administered a blended 30-day DSM-IV and DSM-5 PTSD symptom assessment based on the civilian PTSD Checklist for DSM-IV (PCL-C) and the PTSD Checklist for DSM-5 (PCL-5). This assessment was completed by 9,193 soldiers from three US Army Brigade Combat Teams approximately three months after returning from Afghanistan. PCL-C items were used to operationalize conservative and broad approximations of DSM-5 PTSD diagnoses. The operating characteristics of these approximations were examined compared to diagnoses based on actual DSM-5 criteria. Results: The estimated 30-day prevalence of DSM-5 PTSD based on conservative (4.3%) and broad (4.7%) approximations of DSM-5 criteria using DSM-IV symptom assessments were similar to estimates based on actual DSM-5 criteria (4.6%). Both approximations had excellent sensitivity (92.6-95.5%), specificity (99.6-99.9%), total classification accuracy (99.4-99.6%), and area under the receiver operating characteristic curve (0.96-0.98). Conclusions: DSM-IV symptoms can be used to approximate DSM-5 diagnoses of PTSD among recently-deployed soldiers, making it possible to recode symptom-level data from earlier DSM-IV studies to draw inferences about DSM-5 PTSD. However, replication is needed in broader trauma-exposed samples to evaluate the external validity of this finding. PMID:25845710
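
    The operating characteristics quoted above are simple confusion-matrix ratios. The sketch below shows how they are computed; the counts are hypothetical round numbers chosen only to be of the same order as the study (9,193 soldiers, ~4.6% prevalence), not the actual data.

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, and total classification accuracy of a
    binary approximation against a gold-standard diagnosis."""
    sensitivity = tp / (tp + fn)                 # true cases the proxy flags
    specificity = tn / (tn + fp)                 # non-cases it correctly passes over
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # all correct calls
    return sensitivity, specificity, accuracy

# Hypothetical counts (NOT the study's data): 4.6% prevalence in a
# sample of 9,193 gives ~423 true cases; suppose the proxy catches 400
# of them and falsely flags 9 of the 8,770 non-cases.
sens, spec, acc = operating_characteristics(tp=400, fp=9, fn=23, tn=8761)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

    Note how a rare outcome inflates total accuracy: even a proxy that missed every case would be ~95% "accurate" here, which is why sensitivity and specificity are reported separately.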

  12. The NOSAMS sample preparation laboratory in the next millennium: Progress after the WOCE program

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, Alan R. E-mail: agagnon@whoi.edu; McNichol, Ann P.; Donoghue, Joanne C.; Stuart, Dana R.; Reden, Karl von

    2000-10-01

    Since 1991, the primary charge of the National Ocean Sciences AMS (NOSAMS) facility at the Woods Hole Oceanographic Institution has been to supply high-throughput, high-precision AMS {sup 14}C analyses for seawater samples collected as part of the World Ocean Circulation Experiment (WOCE). Approximately 13,000 samples taken as part of WOCE should be fully analyzed by the end of Y2K. Additional sample sources and techniques must be identified and incorporated if NOSAMS is to continue in its present operation mode. A trend in AMS today is the ability to routinely process and analyze radiocarbon samples that contain tiny amounts (<100 {mu}g) of carbon. The capability to mass-produce small samples for {sup 14}C analysis has been recognized as a major facility goal. The installation of a new 134-position MC-SNICS ion source, which utilizes a smaller graphite target cartridge than presently used, is one step towards realizing this goal. New preparation systems constructed in the sample preparation laboratory (SPL) include an automated bank of 10 small-volume graphite reactors, an automated system to process organic carbon samples, and a multi-dimensional preparative capillary gas chromatograph (PCGC).

  13. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  15. Multilinear analysis of Time-Resolved Laser-Induced Fluorescence Spectra of U(VI) containing natural water samples

    Science.gov (United States)

    Višňák, Jakub; Steudtner, Robin; Kassahun, Andrea; Hoth, Nils

    2017-09-01

    Natural waters' uranium level monitoring is of great importance for health and environmental protection. One possible detection method is Time-Resolved Laser-Induced Fluorescence Spectroscopy (TRLFS), which offers the possibility to distinguish different uranium species. The analytical identification of aqueous uranium species in natural water samples is of distinct importance since individual species differ significantly in sorption properties and mobility in the environment. Samples originate from former uranium mine sites and have been provided by Wismut GmbH, Germany. They have been characterized by total elemental concentrations and TRLFS spectra. Uranium in the samples is supposed to be in the form of uranyl(VI) complexes, mostly with carbonate (CO₃²⁻) and bicarbonate (HCO₃⁻) and to a lesser extent with sulphate (SO₄²⁻), arsenate (AsO₄³⁻), hydroxo (OH⁻), nitrate (NO₃⁻) and other ligands. The presence of alkaline earth metal dications (M = Ca²⁺, Mg²⁺, Sr²⁺) will cause most of the uranyl to prefer ternary complex species, e.g. Mₙ(UO₂)(CO₃)₃²ⁿ⁻⁴ (n ∈ {1, 2}). Among species quenching the luminescence, Cl⁻ and Fe²⁺ should be mentioned. Measurement has been done under cryogenic conditions to increase the luminescence signal. Data analysis has been based on Singular Value Decomposition and a monoexponential fit of the corresponding loadings (for separate TRLFS spectra, the "Factor Analysis of Time Series" (FATS) method) and Parallel Factor Analysis (PARAFAC, all data analysed simultaneously). From individual component spectra, excitation energies T₀₀, uranyl symmetric-mode vibrational frequencies ω_gs and excitation-driven U-Oyl bond elongations ΔR have been determined and compared with quasirelativistic (TD)DFT/B3LYP theoretical predictions to cross-check the experimental data interpretation. Note to the reader: Several errors were present in the initial version of this article. This new version, published on 23 October 2017, contains all the corrections.

  16. Approximate models for neutral particle transport calculations in ducts

    International Nuclear Information System (INIS)

    Ono, Shizuca

    2000-01-01

    The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)

  17. The bathtub vortex in a rotating container

    DEFF Research Database (Denmark)

    Andersen, Anders Peter; Bohr, Tomas; Stenum, B.

    2006-01-01

    We study the time-independent free-surface flow which forms when a fluid drains out of a container, a so-called bathtub vortex. We focus on the bathtub vortex in a rotating container and describe the free-surface shape and the complex flow structure using photographs of the free surface, flow...... expansion approximation of the central vortex core and reduce the model to a single first-order equation. We solve the equation numerically and find that the axial velocity depends linearly on height whereas the azimuthal velocity is almost independent of height. We discuss the model of the bathtub vortex...

  18. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...

  19. Laboratory Sampling Guide

    Science.gov (United States)

    2012-05-11

    environment, and by ingestion of foodstuffs that have incorporated C-14 by photosynthesis. Like tritium, C-14 is a very low energy beta emitter and is... bacterial growth and to minimize development of solids in the sample. • Properly identify each sample container with name, SSN, and collection start and... sampling in the same cardboard carton. The sample may be kept cool or frozen during collection to control odor and bacterial growth. • Once

  20. Error estimates for the Fourier-finite-element approximation of the Lamé system in nonsmooth axisymmetric domains

    International Nuclear Information System (INIS)

    Nkemzi, Boniface

    2003-10-01

    This paper is concerned with the effective implementation of the Fourier-finite-element method, which combines the approximating Fourier and the finite-element methods, for treating the Dirichlet problem for the Lamé equations in axisymmetric domains Ω̂ ⊂ R³ with conical vertices and reentrant edges. The partial Fourier decomposition reduces the three-dimensional boundary value problem to an infinite sequence of decoupled two-dimensional boundary value problems on the plane meridian domain Ω_a ⊂ R₊² of Ω̂, with solutions u_n (n = 0, 1, 2, ...) being the Fourier coefficients of the solution u of the 3D problem. The asymptotic behavior of the Fourier coefficients near the angular points of Ω_a is described by appropriate singular vector-functions and treated numerically by linear finite elements on locally graded meshes. For the right-hand side function f̂ ∈ (L₂(Ω̂))³ it is proved that with appropriate mesh grading the rate of convergence of the combined approximations in (W₂¹(Ω̂))³ is of the order O(h + N⁻¹), where h and N are the parameters of the finite-element and Fourier approximations, respectively, with h → 0 and N → ∞. (author)

  1. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
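A minimal sketch of spline approximation with the truncated power basis discussed in this record (Part 1's construction from truncated polynomials); the knot placement and test data are illustrative assumptions:

```python
# Least-squares fit of a cubic spline with fixed knots using the truncated
# power basis: 1, x, x^2, x^3, and (x - k)_+^3 for each interior knot k.
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix of the truncated power basis at sample points x."""
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # noisy 2D curve data

knots = np.linspace(1.0, 9.0, 7)                     # interior knots (assumed)
A = truncated_power_basis(x, knots)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # ordinary least squares
residual = y - A @ coef
rms = np.sqrt(np.mean(residual ** 2))
print(f"RMS residual: {rms:.4f}")
```

As the record notes for Part 3, this basis can be numerically unstable for many knots; the B-spline basis of Part 2 is the better-conditioned alternative.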

  2. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  3. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
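The core ABC idea this record builds on (accept parameter samples whose simulated output falls within a tolerance region around the observed output) can be sketched with a toy rejection sampler; ABC-SubSim itself is far more elaborate, and the model, prior and tolerance below are assumptions for illustration:

```python
# Toy ABC rejection sampling: draw theta from the prior, simulate an output,
# keep theta only if the output lands within tol of the observation.
import random

random.seed(3)
observed = 2.0                                   # observed output (assumed)

def simulate(theta):
    """Toy forward model: output = theta plus Gaussian measurement noise."""
    return theta + random.gauss(0.0, 0.1)

def abc_rejection(n_accept, tol):
    accepted = []
    while len(accepted) < n_accept:
        theta = random.uniform(-5.0, 5.0)        # prior sample
        if abs(simulate(theta) - observed) < tol:
            accepted.append(theta)
    return accepted

posterior = abc_rejection(n_accept=200, tol=0.2)
mean_theta = sum(posterior) / len(posterior)
print(f"approximate posterior mean: {mean_theta:.2f}")
```

This matches the observation in the abstract that ABC's approximate posterior is the exact posterior of a model with an added uniformly distributed error of width set by the tolerance; Subset Simulation replaces the naive rejection loop so that shrinking the tolerance stays affordable.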

  4. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    Science.gov (United States)

    Cepowski, Tomasz

    2017-06-01

    The paper presents mathematical relationships that allow us to forecast the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power based on the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimation of container ship engine power needed in preliminary parametric design of the ship. It follows from the above that the use of multiple linear regression to predict the main engine power of a container ship brings more accurate solutions than simple linear regression.
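The multivariate linear regression approach can be sketched as follows; the fleet data and resulting coefficients are invented for illustration, not the paper's 2005-2015 dataset:

```python
# Fit main engine power P as a linear function of length between
# perpendiculars (Lpp) and container capacity (TEU) by least squares.
import numpy as np

# hypothetical fleet data: Lpp [m], capacity [TEU], engine power [kW]
lpp = np.array([210.0, 250.0, 290.0, 320.0, 350.0, 380.0])
teu = np.array([2500, 4300, 6500, 8500, 11000, 14000], dtype=float)
power = np.array([21000, 30000, 42000, 52000, 62000, 72000], dtype=float)

X = np.column_stack([np.ones_like(lpp), lpp, teu])   # intercept + 2 predictors
beta, *_ = np.linalg.lstsq(X, power, rcond=None)

def predict_power(lpp_m, capacity_teu):
    """Estimated main engine power [kW] for a preliminary design."""
    return beta[0] + beta[1] * lpp_m + beta[2] * capacity_teu

p = predict_power(300.0, 7500.0)
print(f"predicted power for a 300 m / 7500 TEU ship: {p:.0f} kW")
```

With a single predictor the same call to `lstsq` reduces to the simple linear regression the paper finds less accurate.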

  5. From Mie to Fresnel through effective medium approximation with multipole contributions

    International Nuclear Information System (INIS)

    Malasi, Abhinav; Kalyanaraman, Ramki; Garcia, Hernando

    2014-01-01

    The Mie theory gives the exact solution to scattering from spherical particles while the Fresnel theory provides the solution to optical behavior of multilayer thin film structures. Often, the bridge between the two theories to explain the behavior of materials such as nanoparticles in a host dielectric matrix, is done by effective medium approximation (EMA) models which exclusively rely on the dipolar response of the scattering objects. Here, we present a way to capture multipole effects using EMA. The effective complex dielectric function of the composite is derived using the Clausius–Mossotti relation and the multipole coefficients of the approximate Mie theory. The optical density (OD) of the dielectric slab is then calculated using the Fresnel approach. We have applied the resulting equation to predict the particle size dependent dipole and quadrupole behavior for spherical Ag nanoparticles embedded in glass matrix. This dielectric function contains the relevant properties of EMA and at the same time predicts the multipole contributions present in the single particle Mie model. (papers)

  6. Testing a groundwater sampling tool: Are the samples representative?

    International Nuclear Information System (INIS)

    Kaback, D.S.; Bergren, C.L.; Carlson, C.A.; Carlson, C.L.

    1989-01-01

    A ground water sampling tool, the HydroPunch trademark, was tested at the Department of Energy's Savannah River Site in South Carolina to determine if representative ground water samples could be obtained without installing monitoring wells. Chemical analyses of ground water samples collected with the HydroPunch trademark from various depths within a borehole were compared with chemical analyses of ground water from nearby monitoring wells. The site selected for the test was in the vicinity of a large coal storage pile and a coal pile runoff basin that was constructed to collect the runoff from the coal storage pile. Existing monitoring wells in the area indicate the presence of a ground water contaminant plume that: (1) contains elevated concentrations of trace metals; (2) has an extremely low pH; and (3) contains elevated concentrations of major cations and anions. Ground water samples collected with the HydroPunch trademark provide an excellent estimate of ground water quality at discrete depths. Groundwater chemical data collected from various depths using the HydroPunch trademark can be averaged to simulate what a screen zone in a monitoring well would sample. The averaged depth-discrete data compared favorably with the data obtained from the nearby monitoring wells.

  7. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
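The weighting idea described in the abstract can be sketched as an inverse-probability-weighted mean; using each unit's in-degree as the proxy for its selection probability is an assumption made here for illustration, not the paper's estimator:

```python
# Design-based correction for snowball samples: down-weight units that were
# more likely to be recruited (here, proportionally to their in-degree).

def weighted_mean(values, in_degrees):
    """Inverse-in-degree weighted mean of a sampled characteristic."""
    weights = [1.0 / d for d in in_degrees]   # approx. inverse selection prob.
    total_w = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total_w

# hypothetical sample: characteristic of interest and each unit's in-degree
values = [10.0, 12.0, 30.0, 8.0]
in_degrees = [1, 2, 10, 1]
print(weighted_mean(values, in_degrees))      # high-degree unit is down-weighted
```

The unweighted mean of these values is 15.0; the weighting pulls the estimate toward the hard-to-reach low-degree units.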

  8. Insight into structural phase transitions from the decoupled anharmonic mode approximation.

    Science.gov (United States)

    Adams, Donat J; Passerone, Daniele

    2016-08-03

    We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for the fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high-temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal-space sampling is investigated using the polarizable-ion model.

  9. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    Science.gov (United States)

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  10. Muonic-hydrogen molecular bound states, quasibound states, and resonances in the Born-Oppenheimer approximation

    International Nuclear Information System (INIS)

    Jackson, J.D.

    1994-01-01

    The Born-Oppenheimer approximation is used as an exploratory tool to study bound states, quasibound states, and scattering resonances in muon (μ)--hydrogen (x)--hydrogen (y) molecular ions. Our purpose is to comment on the existence and nature of the narrow states reported in three-body calculations, for L=0 and 1, at approximately 55 eV above threshold and the family of states in the same partial waves reported about 1.9 keV above threshold. We first discuss the motivation for study of excited states beyond the well-known and well-studied bound states. Then we reproduce the energies and other properties of these well-known states to show that, despite the relatively large muon mass, the Born-Oppenheimer approximation gives a good, semiquantitative description containing all the essential physics. Born-Oppenheimer calculations of the s- and p-wave scattering of d-(dμ), d-(tμ), and t-(tμ) are compared with the accurate three-body results, again with general success. The places of disagreement are understood in terms of the differences in location of slightly bound (or unbound) states in the Born-Oppenheimer approximation compared to the accurate three-body calculations

  11. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
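A minimal sketch of the discrete least-squares setting analyzed here, in one variable with sampling from the uniform probability measure; the target function and the oversampling factor relating sample count to polynomial-space dimension are illustrative choices:

```python
# Discrete least-squares polynomial approximation from random evaluations:
# draw m samples from U[-1, 1], fit a degree-8 polynomial, check the error.
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.exp(x)                      # target function (assumed)

degree = 8                                   # polynomial space dimension = 9
m = 4 * (degree + 1)                         # oversample relative to dimension
x = rng.uniform(-1.0, 1.0, size=m)
V = np.vander(x, degree + 1, increasing=True)
coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)

# evaluate the approximation error on a fine grid
xg = np.linspace(-1.0, 1.0, 1000)
approx = np.vander(xg, degree + 1, increasing=True) @ coef
err = np.max(np.abs(approx - f(xg)))
print(f"max error of degree-{degree} least-squares fit: {err:.2e}")
```

"Quasi-optimality" in the abstract's sense means this error stays comparable to the best degree-8 approximation of the target, provided m is large enough relative to the space's dimension.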

  12. Approximation of Corrected Calcium Concentrations in Advanced Chronic Kidney Disease Patients with or without Dialysis Therapy

    Directory of Open Access Journals (Sweden)

    Yoshio Kaku

    2015-08-01

    Background: The following calcium (Ca) correction formula (Payne) is conventionally used for serum Ca estimation: corrected total Ca (TCa) (mg/dl) = TCa (mg/dl) + [4 - albumin (g/dl)]; however, it is inapplicable to advanced chronic kidney disease (CKD) patients. Methods: 1,922 samples from CKD G4 + G5 patients and 341 samples from CKD G5D patients were collected. Levels of TCa (mg/dl), ionized Ca²⁺ (iCa²⁺) (mmol/l) and other clinical parameters were measured. We assumed the corrected TCa to be equal to eight times the iCa²⁺ value (measured corrected TCa). We subsequently performed stepwise multiple linear regression analysis using the clinical parameters. Results: The following formulas were devised from multiple linear regression analysis. For CKD G4 + G5 patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - P) + 0.22. For CKD G5D patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 0.1 × (6 - P) + 0.05 × (24 - HCO₃⁻) + 0.35. Receiver operating characteristic analysis showed high values of the area under the curve of approximated corrected TCa for the detection of measured corrected TCa ≥8.4 mg/dl and ≤10.4 mg/dl for each CKD sample. Both intraclass correlation coefficients for each CKD sample demonstrated superior agreement using the new formula compared to the previously reported formulas. Conclusion: Compared to other formulas, the approximated corrected TCa values calculated from the new formulas for patients with CKD G4 + G5 and CKD G5D demonstrate superior agreement with the measured corrected TCa.
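The two approximation formulas quoted in the Results section transcribe directly into code; this is a sketch of the published formulas as I read them from the abstract (TCa and P in mg/dl, albumin in g/dl, HCO3- in mmol/l), not clinical software:

```python
# Approximated corrected total calcium for advanced CKD, per the abstract.

def corrected_tca_g4_g5(tca, albumin, ph, p):
    """Approximated corrected TCa (mg/dl) for CKD G4 + G5 (pre-dialysis)."""
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - p) + 0.22

def corrected_tca_g5d(tca, albumin, p, hco3):
    """Approximated corrected TCa (mg/dl) for CKD G5D (dialysis)."""
    return tca + 0.25 * (4 - albumin) + 0.1 * (6 - p) + 0.05 * (24 - hco3) + 0.35

# hypothetical lab values for illustration
print(corrected_tca_g4_g5(tca=8.0, albumin=3.0, ph=7.3, p=7.0))   # 8.77 mg/dl
print(corrected_tca_g5d(tca=8.0, albumin=3.0, p=7.0, hco3=20.0))  # 8.70 mg/dl
```

Note how both formulas keep Payne's albumin term (with a smaller 0.25 coefficient) and add pH, phosphate or bicarbonate corrections that Payne's formula omits.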

  13. Effect of quarterly treatments with a chlorhexidine and a fluoride varnish on approximal caries in caries-susceptible teenagers: a 3-year clinical study.

    Science.gov (United States)

    Petersson, L G; Magnusson, K; Andersson, H; Almquist, B; Twetman, S

    2000-01-01

    The aim of this study was to compare the effect of two different dental varnishes on approximal caries incidence in teenagers with proven caries susceptibility during a 3-year period. Two hundred 13- to 14-year-old subjects exhibiting at least two approximal enamel caries lesions were selected to take part in the study. One hundred and eighty subjects participated after informed consent and were randomly assigned to two equally sized groups. One group was treated with a fluoride varnish (FV, Fluor Protector) containing 0.1% F every 3rd month and the participants of the other group were treated in the same mode with a chlorhexidine varnish (CV, Cervitec®) containing 1% chlorhexidine and 1% thymol. In total, each subject was treated 12 times during the experimental period. Approximal caries including enamel lesions (DMFS(appr)) were recorded from four bitewing radiographs exposed at the start and end of the study. The mean (±SD) caries prevalence at baseline was 2.2±3.4 in the FV group and 2.5±4.0 in the CV group. After 3 years, the average approximal caries incidence was 2.7±3.1 and 3.1±3.5 in the FV and CV groups, respectively. The differences at baseline and after 3 years were not statistically significant. In conclusion, treatments every 3rd month with either a fluoride- or a chlorhexidine/thymol-containing varnish showed a promising effect with low approximal caries incidence and progression in teenagers with proven caries susceptibility.

  14. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
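For context, the standard Flory result for the self-avoiding chain is the size exponent ν = 3/(d + 2); a quick check of the values it gives across the dimensions covered by the abstract's 2-5% accuracy estimate:

```python
# Flory estimate of the swelling exponent for a self-avoiding chain in
# d dimensions: nu = 3 / (d + 2).

def flory_nu(d):
    """Flory approximation to the self-avoiding-walk size exponent."""
    return 3.0 / (d + 2)

for d in (1, 2, 3, 4):
    print(f"d={d}: nu_Flory={flory_nu(d):.4f}")
```

The approximation is exact at d = 1 (ν = 1) and d = 2 (ν = 3/4), and reproduces the mean-field value ν = 1/2 at the upper critical dimension d = 4, which is why its error stays at the few-percent level quoted in the abstract.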

  15. Fluid sampling tool

    Science.gov (United States)

    Garcia, A.R.; Johnston, R.G.; Martinez, R.K.

    1999-05-25

    A fluid sampling tool is described for sampling fluid from a container. The tool has a fluid collecting portion which is drilled into the container wall, thereby affixing it to the wall. The tool may have a fluid extracting section which withdraws fluid collected by the fluid collecting section. The fluid collecting section has a fluted shank with an end configured to drill a hole into a container wall. The shank has a threaded portion for tapping the borehole. The shank is threadably engaged to a cylindrical housing having an inner axial passageway sealed at one end by a septum. A flexible member having a cylindrical portion and a bulbous portion is provided. The housing can be slid into an inner axial passageway in the cylindrical portion and sealed to the flexible member. The bulbous portion has an outer lip defining an opening. The housing is clamped into the chuck of a drill, the lip of the bulbous section is pressed against a container wall until the shank touches the wall, and the user operates the drill. Wall shavings (kerf) are confined in a chamber formed in the bulbous section as it folds when the shank advances inside the container. After sufficient advancement of the shank, an o-ring makes a seal with the container wall. 6 figs.

  16. Fission neutron irradiation of copper containing implanted and transmutation produced helium

    DEFF Research Database (Denmark)

    Singh, B.N.; Horsewell, A.; Eldrup, Morten Mostgaard

    1992-01-01

    High purity copper containing approximately 100 appm helium was produced in two ways. In the first, helium was implanted by cyclotron at Harwell at 323 K. In the second method, helium was produced as a transmutation product in 800 MeV proton irradiation at Los Alamos, also at 323 K. The distribut...... as well as the effect of the presence of other transmutation produced impurity atoms in the 800 MeV proton irradiated copper will be discussed....

  17. Feasibility Study of Neutron Multiplicity Assay for a Heterogeneous Sludge Sample containing Na, Pu and other Impurities

    International Nuclear Information System (INIS)

    Nakamura, H.; Nakamichi, H.; Mukai, Y.; Yoshimoto, K.; Beddingfield, D.H.

    2010-01-01

    To reduce the radioactivity of liquid waste generated at PCDF, a neutralization precipitation process using sodium hydroxide is applied to the radioactive nuclides. We call the precipitate, after calcination, a 'sludge'. Pu mass in the sludge is normally determined by sampling and DA within the required uncertainty on DIQ. The annual yield of the mass is small, but it accumulates and reaches a few kilograms, so it is declared as retained waste and verified at PIV. An HM-5-based verification is applied for sludge verification. The sludge contains many chemical components: for example, Pu (~10 wt%), U, Am, SUS components, halogens, NaNO3 (the main component), residual NaOH, and moisture. They are mixed together as an impure, heterogeneous sludge sample. As a result, there is a large uncertainty in the sampling and DA currently used at PCDF. In order to improve the material accounting, we performed a feasibility study using neutron multiplicity assay for impure sludge samples. We measured selected sludge samples using a multiplicity counter called FCAS (Fast Carton Assay System), which was designed by JAEA and Canberra. The PCDF sludge materials fall into the category of 'difficult to measure' because of the high levels of impurities, high alpha values and somewhat small Pu mass. For the sludge measurements, it was confirmed that good consistency between the Pu mass in a pure sludge standard (PuO2-Na2U2O7, alpha=7) and the DA could be obtained. For unknown samples, using 14-hour measurements, we could obtain quite low statistical uncertainty on the Doubles (~1%) and Triples (~10%) count rates, although the alpha value was extremely high (15-25) and the FCAS efficiency (40%) was relatively low for typical multiplicity counters. Despite the detector efficiency challenges and the material challenges (high alpha, low Pu mass, heterogeneous matrix), we have been able to obtain assay results that greatly exceed the accountancy requirements for retained waste materials. We have …

  18. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and ray-tracing studies are used to compare their optical performance with that of the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on establishing the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long mirrors using segmented mirrors.

  19. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
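The weak-approximation idea above can be illustrated on a scalar SDE with a known mean (geometric Brownian motion) rather than the infinite-dimensional HJM setting; this is only a sketch, and all parameter values are hypothetical:

```python
import math
import random

def euler_weak_mean(r, sigma, T, n_steps, n_paths, seed=0):
    """Monte Carlo-Euler estimate of E[X_T] for dX = r*X dt + sigma*X dW, X_0 = 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = 1.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
            x += r * x * dt + sigma * x * dw    # explicit Euler step
        total += x
    return total / n_paths

exact = math.exp(0.05 * 1.0)  # known weak quantity: E[X_T] = exp(r*T)
approx = euler_weak_mean(0.05, 0.2, 1.0, 50, 20000)
print(abs(approx - exact))    # combined time-discretization + statistical error
```

The gap between `approx` and `exact` bundles exactly the two error sources the abstract separates: the time-discretization bias and the finite-sampling statistical error.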

  20. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  1. Seismic transient analysis of a containment vessel with penetrations

    International Nuclear Information System (INIS)

    Dahlke, H.J.; Weiner, E.O.

    1979-12-01

    A linear transient analysis of the FFTF containment vessel was conducted with STAGS to justify the load levels used for the seismic qualification testing of the heating and ventilation valve operators. The model consists of a thin axisymmetric shell for the containment vessel, with four penetrations characterized by linear and rotational inertias as well as their attachment characteristics to the shell. The motions considered are horizontal, rocking and vertical input to the base, and the solution is carried out by direct integration. Results show that the test levels and the approximate analyses considered are conservative. Response spectra for some containment vessel penetrations applicable to the model are presented.

  2. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  3. An unusual presentation of a customs importation seizure containing amphetamine, possibly synthesized by the APAAN-P2P-Leuckart route.

    Science.gov (United States)

    Power, John D; Barry, Michael G; Scott, Kenneth R; Kavanagh, Pierce V

    2014-01-01

    During the analysis of an Irish customs seizure, 14 packages each containing approximately one kilogram of a white wet paste were analysed for the suspected presence of controlled drugs. The samples were found to contain amphetamine and also characteristic by-products including benzyl cyanide, phenylacetone (P2P), methyl-phenyl-pyrimidines, N-formylamphetamine, naphthalene derivatives and amphetamine dimers. The analytical results corresponded with the impurity profile observed and recently reported for the synthesis of 4-methylamphetamine from 4-methylphenylacetoacetonitrile [1]. The synthesis of amphetamine from alpha-phenylacetoacetonitrile (APAAN) was performed (via acid hydrolysis and a subsequent Leuckart reaction) and the impurity profile of the product obtained was compared to those observed in the customs seizure. Observations are made regarding the route specificity of these by-products. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.
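The representativeness argument above can be illustrated with a toy simulation: when residue is carried by only a small fraction of particles in a lot, small test portions give highly variable results. The 1% incidence rate, portion sizes and trial counts below are invented for illustration:

```python
import random
import statistics

def rsd_of_test_portions(portion_size, n_trials=400, seed=1):
    """Relative standard deviation of the measured residue level across
    repeated test portions drawn from a lot in which only 1% of
    particles (a hypothetical incidence) carry the residue."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        hits = sum(1 for _ in range(portion_size) if rng.random() < 0.01)
        results.append(hits / portion_size)
    m = statistics.mean(results)
    return statistics.stdev(results) / m

print(rsd_of_test_portions(100))    # small test portion: high RSD
print(rsd_of_test_portions(10000))  # large test portion: much lower RSD
```

The larger test portion yields a far lower relative spread, which is the core reason sample mass reduction must be validated before moving to ~100 mg test portions.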

  5. Analytic number theory, approximation theory, and special functions in honor of Hari M. Srivastava

    CERN Document Server

    Rassias, Michael

    2014-01-01

    This book, in honor of Hari M. Srivastava, discusses essential developments in mathematical research in a variety of problems. It contains thirty-five articles, written by eminent scientists from the international mathematical community, including both research and survey works. Subjects covered include analytic number theory, combinatorics, special sequences of numbers and polynomials, analytic inequalities and applications, approximation of functions and quadratures, orthogonality, and special and complex functions. The mathematical results and open problems discussed in this book are presented in a simple and self-contained manner. The book contains an overview of old and new results, methods, and theories toward the solution of longstanding problems in a wide scientific field, as well as new results in rapidly progressing areas of research. The book will be useful for researchers and graduate students in the fields of mathematics, physics, and other computational and applied sciences.

  6. Sample processing method for the determination of perchlorate in milk

    International Nuclear Information System (INIS)

    Dyke, Jason V.; Kirk, Andrea B.; Kalyani Martinelango, P.; Dasgupta, Purnendu K.

    2006-01-01

    In recent years, many different water sources and foods have been reported to contain perchlorate. Studies indicate that significant levels of perchlorate are present in both human and dairy milk. The determination of perchlorate in milk is particularly important due to its potential health impact on infants and children. As for many other biological samples, sample preparation is more time consuming than the analysis itself. The concurrent presence of large amounts of fats, proteins, carbohydrates, etc., demands some initial cleanup; otherwise the separation column lifetime and the limit of detection are both greatly compromised. Reported milk processing methods require the addition of chemicals such as ethanol, acetic acid or acetonitrile. Reagent addition is undesirable in trace analysis. We report here an essentially reagent-free sample preparation method for the determination of perchlorate in milk. Milk samples are spiked with isotopically labeled perchlorate and centrifuged to remove lipids. The resulting liquid is placed in a disposable centrifugal ultrafilter device with a molecular weight cutoff of 10 kDa, and centrifuged. Approximately 5-10 ml of clear liquid, ready for analysis, is obtained from a 20 ml milk sample. Both bovine and human milk samples have been successfully processed and analyzed by ion chromatography-mass spectrometry (IC-MS). Standard addition experiments show good recoveries. The repeatability of the analytical result for the same sample in multiple sample cleanup runs ranged from 3 to 6% R.S.D. This processing technique has also been successfully applied for the determination of iodide and thiocyanate in milk

  7. Protein precipitation of diluted samples in SDS-containing buffer with acetone leads to higher protein recovery and reproducibility in comparison with TCA/acetone approach.

    Science.gov (United States)

    Santa, Cátia; Anjo, Sandra I; Manadas, Bruno

    2016-07-01

    Proteomic approaches are extremely valuable in many fields of research, and mass spectrometry methods have gained increasing interest, especially because of the ability to perform quantitative analysis. Nonetheless, sample preparation prior to mass spectrometry analysis is of the utmost importance. In this work, two protein precipitation approaches widely used for cleaning and concentrating protein samples were tested and compared on very dilute samples solubilized in a strong buffer (containing SDS). The amount of protein recovered after acetone and after TCA/acetone precipitation was assessed, and protein identification and relative quantification yields by SWATH-MS were compared with results from the same sample without precipitation. From this study, it was possible to conclude that for dilute samples in denaturing buffers, cold acetone precipitation is more favourable than TCA/acetone in terms of reproducibility of protein recovery and the number of identified and quantified proteins. Furthermore, the reproducibility of relative protein quantification is even higher in samples precipitated with acetone than in the original sample. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  9. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
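The multilevel circulant construction itself is involved; as a much simpler, hedged sketch of the underlying kernel-selection problem it accelerates (choosing an RBF bandwidth by a leave-one-out criterion; the data, candidate grid and estimator are invented for illustration, not taken from the paper):

```python
import math
import random

def nw_predict(x, xs, ys, gamma, skip):
    """Nadaraya-Watson estimate at x with an RBF kernel, leaving index `skip` out."""
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        w = math.exp(-gamma * (x - xi) ** 2)
        num += w * yi
        den += w
    return num / den

def loo_error(xs, ys, gamma):
    """Leave-one-out squared error: the kernel-selection criterion."""
    return sum((ys[i] - nw_predict(xs[i], xs, ys, gamma, i)) ** 2
               for i in range(len(xs))) / len(xs)

rng = random.Random(0)
xs = [rng.uniform(0, 3) for _ in range(80)]
ys = [math.sin(2 * x) + rng.gauss(0, 0.1) for x in xs]
# brute-force selection over a candidate grid; the paper's contribution is
# making the equivalent of this search cheap via circulant approximations
best = min([0.01, 0.1, 1.0, 10.0, 100.0], key=lambda g: loo_error(xs, ys, g))
print(best)
```

A grossly oversmoothing bandwidth (gamma = 0.01 here) produces a much larger criterion value than a well-matched one, which is what any selection rule, exact or approximate, must detect.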

  10. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
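A hedged sketch of the second embodiment's idea on a toy symmetric positive-definite system: plain conjugate gradient whose stopping test uses an approximate error computed from a random subset of equations (standing in for the "subset of rays"); the matrix, tolerance and subset fraction are invented:

```python
import random

def cg_subset_monitor(A, b, subset_frac=0.5, tol=1e-8, max_iter=100, seed=0):
    """Conjugate gradient on an SPD system; convergence is monitored with an
    approximate error computed from a random subset of rows ('rays')."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        # approximate error: squared residual over a random subset of rows only
        subset = rng.sample(range(n), max(1, int(subset_frac * n)))
        if sum(r[i] * r[i] for i in subset) < tol:
            break
        rs_new = sum(ri * ri for ri in r)
        beta = rs_new / rs_old
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_subset_monitor(A, b)
print(x)   # close to the exact solution [1/11, 7/11]
```

The update steps themselves are the standard CG recursions; only the error evaluation is subsampled, which is where the claimed savings come from when each "ray" is expensive.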

  11. Certification of Trace Elements and Methyl Mercury Mass Fractions in IAEA-470 Oyster Sample

    International Nuclear Information System (INIS)

    2016-01-01

    This publication describes the production of the IAEA-470 certified reference material, which was produced following ISO Guide 34:2009, General Requirements for the Competence of Reference Materials Producers. A sample of approximately 10 kg of dried oysters was taken from oysters collected, dissected and freeze-dried by the Korean Ocean Research and Development Institute, and was further processed at the IAEA Environment Laboratories to produce a certified reference material. The sample contained certified mass fractions for arsenic, cadmium, calcium, chromium, cobalt, copper, iron, lead, magnesium, manganese, mercury, methyl mercury, rubidium, selenium, silver, sodium, strontium, vanadium and zinc. The produced vials containing the processed oyster sample were carefully capped and stored for further certification studies. Between-unit homogeneity and stability during dispatch and storage were quantified in accordance with ISO Guide 35:2006, Reference Materials - General and Statistical Principles for Certification. The material was characterized by laboratories with demonstrated competence and adhering to ISO/IEC 17025:2005. Uncertainties of the certified values were calculated in compliance with the Guide to the Expression of Uncertainty in Measurement (JCGM 100:2008), including the uncertainty associated with heterogeneity and instability of the material, and with the characterization itself. The material is intended for quality control and assessment of method performance. As with any reference material, it can also be used for control charts or validation studies.
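The uncertainty combination described above (characterization, between-bottle homogeneity and stability contributions combined in quadrature per ISO Guide 35 / JCGM 100:2008, then expanded with a coverage factor) can be sketched as follows; the numeric contributions are hypothetical, not values from the IAEA-470 certificate:

```python
import math

def crm_expanded_uncertainty(u_char, u_bb, u_lts, k=2.0):
    """Combine characterization (u_char), between-bottle homogeneity (u_bb)
    and long-term stability (u_lts) standard uncertainties in quadrature,
    then expand with coverage factor k (k = 2 for ~95% coverage)."""
    u_c = math.sqrt(u_char ** 2 + u_bb ** 2 + u_lts ** 2)
    return k * u_c

# hypothetical contributions for one element, in mg/kg
print(crm_expanded_uncertainty(0.10, 0.05, 0.04))
```

Because the contributions add in quadrature, the largest term (here characterization) dominates the expanded uncertainty.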

  12. New realisation of Preisach model using adaptive polynomial approximation

    Science.gov (United States)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis; it can be represented by an infinite but countable set of first-order reversal curves (FORCs). The use of look-up tables is one way to implement the CPM in practice, with the table data corresponding to samples of a finite number of FORCs. This approach, however, faces two major problems: first, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; second, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, opening the possibility of accurate tracking of the hysteresis model parameters.
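A minimal sketch of the least-squares polynomial approximation step: fitting one saturating, FORC-like branch with a low-degree polynomial via the normal equations. The curve (tanh) and the degree are invented for illustration, not taken from the article:

```python
import math

def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial coefficients (c[0] + c[1]x + ...) via the
    normal equations; adequate for the low degrees used here."""
    n = degree + 1
    # normal equations A c = b built from moments of the sample points
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

# fit one saturating (FORC-like) branch, y = tanh(x), on [-2, 2]
xs = [i / 10 for i in range(-20, 21)]
ys = [math.tanh(x) for x in xs]
c = polyfit_ls(xs, ys, 5)
approx = sum(c[i] * 0.5 ** i for i in range(len(c)))
print(abs(approx - math.tanh(0.5)))   # small fit error
```

Storing six coefficients instead of 41 table samples per branch is the memory saving the article generalizes to the whole family of FORCs.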

  13. Noise events in the contained event sample of Soudan 2

    International Nuclear Information System (INIS)

    Lopez, A.E.; Goodman, M.C.

    1992-01-01

    Various types of noise have been found during the scanning of contained events from the Soudan 2 detector. Although most of the noise types have been given names, the causes of only certain classes have been identified. Of the types mentioned here, zens and end wire noise are probably understood, but explanations for three others - "breakdown gemini", "67 noise", and "split zens" - have yet to be found. Despite the lack of explanations for some noise, there are still a number of ways to recognize it, either by appearance or by position in the detector, both of which are being used to further assist in its identification.

  14. Transport container storage. Pt. 2

    International Nuclear Information System (INIS)

    Guenther, B.; Kuehn, H.D.; Schulz, E.

    1987-01-01

    In connection with mandatory licensing procedures and in the framework of quality control for serially produced containers from spheroidal graphite cast iron of quality grade GGG 40, destined to be used in the transport and storage of radioactive materials, each prototype and each production sample of a design is subjected to comprehensive destructive and non-destructive material tests. The data obtained are needed on the one hand to check whether specified, representative material characteristics are observed; on the other hand they are systematically evaluated to update knowledge and technical standards. The Federal Institute of Materials Research and Testing (BAM) has so far examined 528 individual containers (513 production samples and 15 prototypes) of wall thicknesses from 80 millimetres to 500 millimetres in this connection. It has turned out that the measures for quality assurance and quality control as substantiated by a concept of expertise definitely confirm the validity of component test results for production samples. (orig.) [de

  15.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer…

  16. Absorption and enhancement corrections using XRF analysis of some chemical samples

    International Nuclear Information System (INIS)

    Falih, Arwa Gaddal

    1996-06-01

    In this work, samples containing Cr, Fe and Ni salts in varying ratios were prepared so as to represent approximately the concentrations of these elements in naturally occurring ore samples. These samples were then analyzed by an EDXRF spectrometer system, and the inter-element effects (absorption and enhancement) were evaluated by means of two methods: by using the AXIL-QXAS software to calculate the effects, and by the emission-transmission method to determine the same effects experimentally. The results obtained were compared and a discrepancy in the absorption results was observed. The discrepancy was attributed to the fact that the absorption in the two methods was calculated in different manners, i.e. in the emission-transmission method the absorption factor was calculated by adding different absorption terms according to what is known as the additive law, whereas in the software it was calculated from the scattered peaks method, which does not obey this law. It was concluded that the program should be modified by incorporating the emission-transmission method to calculate the absorption. Quality assurance of the data was performed through the analysis of standard alloys obtained from the International Atomic Energy Agency (IAEA). (Author)
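The additive law mentioned above (the mass attenuation coefficient of a mixture as the weight-fraction-weighted sum of the elemental coefficients) can be sketched as follows; the coefficients, weight fractions and areal density are hypothetical, not measured values:

```python
import math

def mixture_mass_attenuation(weight_fractions, mu_rho):
    """Additive (mixture) rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i."""
    assert abs(sum(weight_fractions) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(w * m for w, m in zip(weight_fractions, mu_rho))

# hypothetical mass attenuation coefficients (cm^2/g) for three components
w = [0.2, 0.5, 0.3]
mu = [90.0, 70.0, 50.0]
mix = mixture_mass_attenuation(w, mu)
areal_density = 0.01                        # g/cm^2 of sample traversed
print(mix, math.exp(-mix * areal_density))  # transmitted fraction (Beer-Lambert)
```

The emission-transmission method effectively measures the exponential factor directly, which is why a correction built from additive terms can disagree with one inferred from scattered peaks.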

  17. Canonical sampling of a lattice gas

    International Nuclear Information System (INIS)

    Mueller, W.F.

    1997-01-01

    It is shown that a sampling algorithm, recently proposed in conjunction with a lattice-gas model of nuclear fragmentation, samples the canonical ensemble only in an approximate fashion. A residual weight factor has to be taken into account to calculate correct thermodynamic averages. Then, however, the algorithm is numerically inefficient. copyright 1997 The American Physical Society
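The residual-weight-factor correction can be illustrated with a toy example: states drawn from a deliberately wrong (flat) ensemble are reweighted with a Boltzmann factor to recover the canonical average. The discrete level scheme is invented and is not the lattice-gas model of the paper:

```python
import math
import random

def reweighted_mean_energy(beta, n_samples=200_000, seed=0):
    """Draw states from a flat (approximate) distribution over energy levels
    0..10, then reweight each sample with the residual factor exp(-beta*E)
    to recover the canonical average <E> at inverse temperature beta."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        e = rng.randrange(11)          # state drawn from the wrong ensemble
        w = math.exp(-beta * e)        # residual weight factor
        num += w * e
        den += w
    return num / den

print(reweighted_mean_energy(1.0))
```

As the abstract notes, such reweighting gives correct thermodynamic averages but can be numerically inefficient: when the weights vary widely, only a small fraction of samples carries most of the weight.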

  18. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  19. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Let (U′; C′) be a subspace of a covering approximation space (U; C) and X ⊂ U′. In this paper, we show that … and B′(X) ⊂ B(X) ∩ U′. Also, … iff (U; C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U; C) and outer (resp. inner) definable subsets in (U′; C′) are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
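For readers unfamiliar with the underlying notions, here is a minimal sketch of Pawlak-style lower and upper approximations and the boundary B(X) under a partition-like covering (a toy universe, not the paper's general covering setting):

```python
def lower_upper(blocks, X):
    """Pawlak-style approximations: the lower approximation is the union of
    blocks contained in X; the upper approximation is the union of blocks
    meeting X. The boundary is B(X) = upper - lower."""
    X = set(X)
    lower, upper = set(), set()
    for block in blocks:
        b = set(block)
        if b <= X:          # block entirely inside X
            lower |= b
        if b & X:           # block intersects X
            upper |= b
    return lower, upper

# toy universe U = {1..6} covered by a partition C
C = [{1, 2}, {3, 4}, {5, 6}]
lo, up = lower_upper(C, {1, 2, 3})
print(sorted(lo), sorted(up), sorted(up - lo))   # [1, 2] [1, 2, 3, 4] [3, 4]
```

The paper's results concern how these operators restrict to a subspace (U′; C′); the sketch only fixes the base definitions they build on.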

  20. Public Use Microdata Samples (PUMS)

    Data.gov (United States)

    National Aeronautics and Space Administration — Public Use Microdata Samples (PUMS) are computer-accessible files containing records for a sample of housing units, with information on the characteristics of each...

  1. Analysis of IFR samples at ANL-E

    International Nuclear Information System (INIS)

    Bowers, D.L.; Sabau, C.S.

    1993-01-01

    The Analytical Chemistry Laboratory analyzes a variety of samples submitted by the different research groups within IFR. This talk describes the analytical work on samples generated by the Plutonium Electrorefiner, Large Scale Electrorefiner and Waste Treatment Studies. The majority of these samples contain transuranics and necessitate facilities that safely contain these radioisotopes. Details such as sample receiving, dissolution techniques, chemical separations, instrumentation used, and reporting of results are discussed. The importance of interactions between customer and analytical personnel is also demonstrated.

  2. Determination of uptake kinetics (sampling rates) by lipid-containing semipermeable membrane devices (SPMDs) for polycyclic aromatic hydrocarbons (PAHs) in water

    Science.gov (United States)

    Huckins, J.N.; Petty, J.D.; Orazio, C.E.; Lebo, J.A.; Clark, R.C.; Gibson, V.L.; Gala, W.R.; Echols, K.R.

    1999-01-01

    The use of lipid-containing semipermeable membrane devices (SPMDs) is becoming commonplace, but very little sampling rate data are available for the estimation of ambient contaminant concentrations from analyte levels in exposed SPMDs. We determined the aqueous sampling rates (Rs; expressed as effective volumes of water extracted daily) of the standard (commercially available design) 1-g triolein SPMD for 15 of the priority pollutant (PP) polycyclic aromatic hydrocarbons (PAHs) at multiple temperatures and concentrations. Under the experimental conditions of this study, recovery-corrected Rs values for PP PAHs ranged from approximately 1.0 to 8.0 L/d. These values would be expected to be influenced by significant changes (relative to this study) in water temperature, degree of biofouling, and current velocity-turbulence. Included in this paper is a discussion of the effects of temperature and the octanol-water partition coefficient (Kow); the impacts of biofouling and hydrodynamics are reported separately. Overall, SPMDs responded proportionally to aqueous PAH concentrations; i.e., SPMD Rs values and SPMD-water concentration factors were independent of aqueous concentrations. Temperature effects (10, 18, and 26 °C) on Rs values appeared to be complex but were relatively small.
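In the linear-uptake regime, the ambient concentration is back-calculated from the analyte mass accumulated in the SPMD as C_w = N / (Rs · t). A sketch of that arithmetic with hypothetical exposure numbers (not data from this study):

```python
def ambient_concentration(n_analyte_ng, rs_l_per_day, days):
    """Linear-uptake estimate: C_w = N / (Rs * t), where N is the analyte
    mass accumulated in the SPMD (ng), Rs the sampling rate (L/d) and
    t the exposure time (d). Returns ng/L."""
    return n_analyte_ng / (rs_l_per_day * days)

# hypothetical exposure: 560 ng of a PAH accumulated over 28 d at Rs = 4.0 L/d
print(ambient_concentration(560.0, 4.0, 28.0))  # 5.0 ng/L
```

The product Rs · t is the "effective volume of water extracted" over the deployment, which is exactly how the abstract expresses the sampling rate.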

  3. Corrosion behavior of austenitic stainless steel containing Ti

    International Nuclear Information System (INIS)

    Cha, Sueng Ok; Choe, Han Cheol; Kim, Kwan Hyu

    1998-01-01

    Corrosion behavior of austenitic stainless steel containing Ti has been studied by using electrochemical techniques. The samples, containing Ti from 0.1 to 1.0 wt%, were solutionized at 1050 °C for 1 h and then sensitized at 650 °C for 5 h under an argon atmosphere. Microstructure and phase analysis of the samples after heat treatment and corrosion tests were carried out by using XRD, TEM, SEM and optical microscopy. The amount of δ-ferrite and TiC precipitates in the matrix increased as the Ti content increased. In the sensitized samples, Cr23C6 precipitates were observed at the γ/δ interface. The Degree of Sensitization (DOS) was lower than 1.0 in all of the solutionized samples and in the sensitized samples with Ti content above 0.4 wt%, whereas the sensitized samples with Ti content lower than 0.4 wt% showed DOS higher than 1.0. Intergranular attack appeared mainly at grain boundaries in the sensitized sample containing 0.1 wt% Ti, and at the γ/δ interface at higher Ti contents; in the latter, however, the attack was not severe. The pitting potential (E_pit) and repassivation potential (E_rep) of the solutionized and sensitized samples increased with increasing Ti content. The number and size of the pits decreased with increasing Ti content in the sensitized samples. The pits nucleated at Cr23C6 sites and the γ/δ interface.

  4. A simple semi-empirical approximation for bond energy

    International Nuclear Information System (INIS)

    Jorge, F.E.; Giambiagi, M.; Giambiagi, M.S. de.

    1985-01-01

    A simple semi-empirical expression for bond energy, related to a generalized bond index, is proposed and applied within the IEH framework. The correlation with experimental data is good for the intermolecular bond energy of base pairs of nucleic acids and other hydrogen-bonded systems. The intramolecular bond energies for a sample of molecules containing typical bonds and for hydrides are discussed. The results are compared with those obtained by other methods. (Author)

  5. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are compared with the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
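
    For orientation, a first-order Sobol' index can be estimated by the classical pick-freeze Monte Carlo scheme, shown here for a simple additive toy model (this is a generic illustration of the reference-index computation, not the LRA post-processing the paper proposes):

```python
import random

def f(x1, x2):
    # Additive toy model: the first input carries most of the variance.
    # Analytically, S1 = (1/12) / (1.25/12) = 0.8 for uniform inputs.
    return x1 + 0.5 * x2

random.seed(0)
n = 100_000
# Two independent input samples (the "A" and "B" matrices of pick-freeze).
a = [(random.random(), random.random()) for _ in range(n)]
b = [(random.random(), random.random()) for _ in range(n)]

ya = [f(x1, x2) for x1, x2 in a]
# "Freeze" x1 from A, redraw x2 from B: this isolates the x1 main effect.
yab = [f(a[i][0], b[i][1]) for i in range(n)]

mean = sum(ya) / n
var = sum((y - mean) ** 2 for y in ya) / n
s1 = (sum(ya[i] * yab[i] for i in range(n)) / n - mean ** 2) / var
print(round(s1, 2))  # Monte Carlo estimate of S1, close to 0.8
```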

  6. The Fourier-finite-element approximation of the lame equations in axisymmetric domains with edges

    International Nuclear Information System (INIS)

    Nkemzil, Boniface

    2003-10-01

    This paper is concerned with a priori error estimates and convergence analysis of the Fourier-finite-element solutions of the Neumann problem for the Lame equations in axisymmetric domains Ω̂ ⊂ R³ with reentrant edges. The Fourier-FEM combines the approximating Fourier method with respect to the rotational angle, using trigonometric polynomials of degree N (N → ∞), with the finite-element method on the plane meridian domain of Ω̂ with mesh size h (h → 0) for approximating the Fourier coefficients. The asymptotic behavior of the solution near reentrant edges is described by singular functions in non-tensor product form and treated numerically by means of the finite element method on locally graded meshes. For a right-hand side f̂ ∈ (L₂(Ω̂))³, it is proved that the rate of convergence of the combined approximations in the norms of (W₂¹(Ω̂))³ is of the order O(h^(2-l) + N^(-(2-l))) (l = 0, 1). (author)

  7. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing, double square-root (DSR), and common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal, based on polynomial expansions in terms of the reflection and dip angles, in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  8. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  9. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples, and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples without any rebuilding. This work presents experience in dealing with small samples containing 1-40 μg carbon. 500 unknown samples from different environmental research fields have been measured so far, most of them with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two techniques is discussed and the main contributions to the blank are determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.
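
    The constant-contamination correction implied by blank quantification of this kind is a simple mass balance: the measured material is treated as a mixture of the true sample and a fixed mass of contaminant carbon. A hedged sketch (the function name and the numbers are illustrative, not the authors' procedure):

```python
def blank_corrected_f14c(f_meas, m_sample_ug, m_blank_ug, f_blank):
    """Correct a measured F14C value for a constant-mass contamination.
    Mass balance: f_meas * (m_sample + m_blank) = f_true * m_sample
                  + f_blank * m_blank."""
    m_total = m_sample_ug + m_blank_ug
    return (f_meas * m_total - f_blank * m_blank_ug) / m_sample_ug

# Illustrative: a 20 ug sample measured at F14C = 0.52, corrected for a
# 0.055 ug blank of F14C = 0.5 (50 pMC), as quantified for the gas source.
print(round(blank_corrected_f14c(0.52, 20.0, 0.055, 0.5), 4))
```

    As expected, the correction is tiny for a 20 μg sample but grows rapidly as the sample mass approaches the blank mass.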

  10. Comparison of the Born series and rational approximants in potential scattering. [Pade approximants, Yukawa and exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. The use of Pade approximants is suggested, and it is shown that this approach provides very fast convergent sequences over the whole energy range studied. Yukawa and exponential potentials are explicitly considered, and the results are compared with high-order Born approximations.
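
    To illustrate why rational approximants can outperform the truncated series they are built from, here is the [1/1] Padé approximant of exp(x), derived from the same first three Taylor coefficients as the second-order series (a generic illustration, not the scattering calculation itself):

```python
import math

def taylor2(x):
    # Second-order truncated Taylor series of exp(x).
    return 1 + x + x**2 / 2

def pade11(x):
    # [1/1] Pade approximant of exp(x), built from the same three
    # Taylor coefficients: (1 + x/2) / (1 - x/2).
    return (1 + x / 2) / (1 - x / 2)

x = 0.5
exact = math.exp(x)
err_taylor = abs(taylor2(x) - exact)
err_pade = abs(pade11(x) - exact)
print(err_pade < err_taylor)  # True: the rational form is more accurate here
```

    The same idea, applied to the partial sums of the Born series, is what yields the fast-converging sequences reported above.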

  11. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in such spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness of the functions.

  12. Investigation of Coating Performance of UV-Curable Hybrid Polymers Containing 1H,1H,2H,2H-Perfluorooctyltriethoxysilane Coated on Aluminum Substrates

    Directory of Open Access Journals (Sweden)

    Mustafa Çakır

    2017-03-01

    This study describes the preparation and characterization of fluorine-containing organic-inorganic hybrid coatings. The organic part consists of bisphenol-A glycerolate (1 glycerol/phenol) diacrylate resin and 1,6-hexanediol diacrylate reactive diluent. The inorganically rich part comprises trimethoxysilane-terminated urethane, 1H,1H,2H,2H-perfluorooctyltriethoxysilane, 3-(trimethoxysilyl)propyl methacrylate and sol–gel precursors that are products of hydrolysis and condensation reactions. Bisphenol-A glycerolate (1 glycerol/phenol) diacrylate resin was added to the inorganic part in predetermined amounts. The resultant mixture was utilized in the preparation of free films as well as coatings on aluminum substrates. Thermal and mechanical tests such as DSC, thermo-gravimetric analysis (TGA), and tensile and Shore D hardness tests were performed on the free films. Water contact angle, gloss, Taber abrasion, cross-cut and tubular impact tests were conducted on the coated samples. SEM examination and EDS analysis were performed on the fractured surfaces of the free films. The hybrid coatings on the aluminum sheets gave rise to properties such as a moderately glossed surface, a low wear rate and hydrophobicity. The tensile strength of the free films increased with up to 10% inorganic content in the hybrid structure, and this increase was approximately three times that of the control sample. As expected, the % strain value decreased by 17.3 with the increase in inorganic content, and elastic modulus values increased by a factor of approximately 6. Resistance to ketone-based solvents was proven, and an increase in hardness was observed as the ratio of the inorganic part increased. Samples containing 10% sol–gel content were observed to provide optimal properties.

  13. Development of Sausages Containing Mechanically Deboned Chicken Meat Hydrolysates.

    Science.gov (United States)

    Jin, S K; Choi, J S; Choi, Y J; Lee, S J; Lee, S Y; Hur, S J

    2015-07-01

    Pork meat sausages were prepared using protein hydrolysates from mechanically deboned chicken meat (MDCM). In terms of color, compared to the controls before and after storage, the redness (a*) was significantly higher in sausages containing MDCM hydrolysates, ascorbate, and sodium erythorbate. After storage, compared to the other sausage samples, the yellowness (b*) was lower in the sausages containing ascorbate and sodium erythorbate. TBARS was not significantly different among the sausage samples before storage, whereas TBARS and DPPH radical scavenging activities were significantly higher in the sausages containing ascorbate and sodium erythorbate, compared to the other sausage samples, after 4 wk of storage. In terms of sensory evaluation, the color was significantly higher in the sausages containing MDCM hydrolysates, ascorbate, and sodium erythorbate, compared to the other sausage samples after 4 wk of storage. The "off-flavor" and overall acceptability were significantly lower in the sausages containing MDCM hydrolysates than in the other sausage samples. In most developed countries, meat from spent laying hens is not consumed, leading to an urgent need for effective utilization or disposal methods. In this study, sausages were prepared using spent laying hens and protein hydrolysates from mechanically deboned chicken meat. Sausage can be made with spent laying hen hydrolysates, although the overall acceptability was lower than that of the other sausage samples. © 2015 Institute of Food Technologists®

  14. Integrated sampling and analysis plan for samples measuring >10 mrem/hour

    International Nuclear Information System (INIS)

    Haller, C.S.

    1992-03-01

    This integrated sampling and analysis plan was prepared to assist in the planning and scheduling of Hanford Site sampling and analytical activities for all waste characterization samples that measure greater than 10 mrem/hour. This report also satisfies the requirements of the renegotiated Interim Milestone M-10-05 of the Hanford Federal Facility Agreement and Consent Order (the Tri-Party Agreement). For purposes of comparing the various analytical needs with the Hanford Site laboratory capabilities, the analytical requirements of the various programs were normalized by converting the required laboratory effort for each type of sample to a common unit of work, the standard analytical equivalency unit (AEU). The AEU approximates the amount of laboratory resources required to perform an extensive suite of analyses on five core segments individually, plus one additional suite of analyses on a composite sample derived from a mixture of the five core segments, and to prepare a validated RCRA-type data package.

  15. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does little to improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach, computed with the projectile-target optical potential, efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  16. Containment and recovery of a light non-aqueous phase liquid plume at a woodtreating facility

    International Nuclear Information System (INIS)

    Crouse, D.; Powell, G.; Hawthorn, S.; Weinstock, S.

    1997-01-01

    A woodtreating site in Montana used a formulation (product) of 5 percent pentachlorophenol and 95 percent diesel fuel as a carrier liquid to pressure treat lumber. Through years of operations, approximately 378,500 liters of this light non-aqueous phase liquid (LNAPL) product spilled onto the ground and soaked into the groundwater. A plume of this LNAPL product flowed in a northerly direction toward a stream located approximately 410 meters from the pressure treatment building. A 271-meter-long high density polyethylene (HDPE) containment cutoff barrier wall was installed 15 meters from the stream to capture, contain, and prevent the product from migrating off site. This barrier extended to a depth of 3.7 meters below ground surface and allowed the groundwater to flow beneath it. Ten product recovery wells, each with a dual-phase pumping system, were installed within the plume, and a groundwater model was completed to indicate how the plume would be contained by generating a cone of influence at each recovery well. The model indicated that the recovery wells and cutoff barrier wall would contain the plume and prevent further migration. To date, nearly 3½ years later, approximately 106,000 liters of product have been recovered.

  17. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm ... A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to large-size protein data. The suggested methodology is fairly general ...
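
    The ABC idea itself can be shown in a few lines with a rejection sampler, here on a Gaussian toy model (a generic illustration, far simpler than the correlated-error SDE setting of the paper, which uses an ABC-MCMC variant):

```python
import random
import statistics

random.seed(1)

# Observed data: draws from N(mu_true, 1); mu is treated as unknown.
mu_true = 2.0
data = [random.gauss(mu_true, 1.0) for _ in range(100)]
obs_mean = statistics.fmean(data)

# ABC rejection: draw mu from the prior, simulate a data set of the same
# size, and keep mu only if the simulated summary statistic (the mean)
# lands within eps of the observed one.
accepted = []
eps = 0.1
while len(accepted) < 100:
    mu = random.uniform(-5.0, 5.0)  # flat prior on mu
    sim = [random.gauss(mu, 1.0) for _ in range(100)]
    if abs(statistics.fmean(sim) - obs_mean) < eps:
        accepted.append(mu)

posterior_mean = statistics.fmean(accepted)
print(round(posterior_mean, 1))  # close to mu_true = 2.0
```

    The accepted draws approximate the posterior without ever evaluating a likelihood, which is the property exploited when the exact likelihood is intractable.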

  18. Measuring the critical current in superconducting samples made of NT-50 under pulse irradiation by high-energy particles

    International Nuclear Information System (INIS)

    Vasilev, P.G.; Vladimirova, N.M.; Volkov, V.I.; Goncharov, I.N.; Zajtsev, L.N.; Zel'dich, B.D.; Ivanov, V.I.; Kleshchenko, E.D.; Khvostov, V.B.

    1981-01-01

    The results of tests of superconducting samples of an uninsulated wire of 0.5 mm diameter, containing 1045 superconducting filaments of 10 μm diameter made of NT-50 superconductor in a copper matrix, are given. The upper ("closed") part of the sample is placed between two glass-cloth-base laminate plates of 50 mm length, and the lower ("open") part of 45 mm length is immersed in liquid helium. The sample is located perpendicular to the magnetic field of a superconducting solenoid and is irradiated by charged particle beams at energies of several GeV. Measurement results are given for the permissible energy release in the sample as a function of subcriticality (I/Ic, where I is the operating current through the sample and Ic is the critical current in the absence of the beam) and of the particle flux density, as well as for the maximum permissible fluence as a function of subcriticality. In the case of the "closed" sample irradiated by short pulses (approximately 1 ms) for I/Ic

  19. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    Directory of Open Access Journals (Sweden)

    Wutthiphong Tara

    2012-02-01

    The purpose of this research was to preliminarily study Mae Moh lignite grindability, emphasizing Hardgrove grindability and approximate work index determination, respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that, the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using Aplan's formula. The approximate work indices were determined by running a batch dry-grinding test using a laboratory ball mill. Finally, the work indices obtained from both methods were compared. It was found that all samples could be ranked as lignite B, using the heating value as the criterion, if the content of mineral matter is neglected. Similarly, all samples can be classified as lignite, with Hardgrove grindability indices ranging from about 40 to 50. However, there is a significant difference between the work indices derived from the Hardgrove and simplified Bond grindability tests. This may be due to differences in the variability of lignite properties and in the test procedures. To obtain more accurate values of the lignite work index, the time-consuming Bond procedure should be performed with a number of corrections for different milling conditions. With the Hardgrove grindability indices and the work indices calculated from Aplan's formula, the capacity of the roller-race pulverizer and the grindability of the Mae Moh lignite should be investigated in further detail.
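
    Empirical HGI-to-work-index conversions take the general form Wi = a / HGI^b. One frequently quoted set of constants (often attributed to Bond; the exact constants of Aplan's formula used in the study may differ) gives Wi in kWh per short ton as roughly 435 / HGI^0.91. A hedged sketch:

```python
def work_index_from_hgi(hgi, a=435.0, b=0.91):
    """Approximate work index (kWh/short ton) from the Hardgrove
    grindability index via the empirical form Wi = a / HGI**b.
    The default constants are one commonly quoted set, not necessarily
    those of Aplan's formula used in the study."""
    return a / hgi ** b

# For the HGI range of about 40-50 reported for the Mae Moh samples:
for hgi in (40, 45, 50):
    print(hgi, round(work_index_from_hgi(hgi), 1))
```

    As the inverse-power form implies, a harder-to-grind coal (lower HGI) maps to a higher work index.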

  20. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation

  1. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  2. Procedures for sampling and sample-reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-04-15

    The bias introduced when sampling solid biofuels from stockpiles or containers instead of from moving streams is assessed, as well as the number and size of samples required to accurately represent the bulk sample, the variations introduced when reducing bulk samples into samples for testing, and the usefulness of sample reduction methods. Details are given of the experimental work carried out in Sweden and Denmark using sawdust, wood chips, wood pellets, forestry residues and straw. The production of a model European Standard for quality assurance of solid biofuels is examined.

  3. Trees Containing Built-In Pulping Catalysts - Final Report - 08/18/1997 - 08/18/2000

    Energy Technology Data Exchange (ETDEWEB)

    Pullman, G.; Dimmel, D.; Peter, G.

    2000-08-18

    Several hardwood and softwood trees were analyzed for the presence of anthraquinone-type molecules. Low levels of anthraquinone (AQ) and anthrone components were detected using gas chromatography-mass spectroscopy and sensitive selected-ion monitoring techniques. Ten out of seventeen hardwood samples examined contained AQ-type components; however, the levels were typically below approximately 6 ppm. No AQs were observed in the few softwood samples that were examined. The AQs were more concentrated in the heartwood of teak than in the sapwood. The delignification of pine was enhanced by the addition of teak chips (approximately 0.7% AQ-equivalence content) to the cook, suggesting that endogenous AQs can be released from wood during pulping and can catalyze delignification reactions. Eastern cottonwood contained AQ, methyl AQ, and dimethyl AQ, all useful for wood pulping. This is the first time unsubstituted AQ has been observed in wood extracts. Due to the presence of these pulping catalysts, rapid growth rates in plantation settings, and the ease of genetic transformation, eastern cottonwood is a suitable candidate for genetic engineering studies to enhance AQ content. To achieve effective catalytic pulping activity, poplar and cottonwood, respectively, require approximately 100 and 1000 times more of these pulping catalysts. A strategy to increase AQ concentration in natural wood was developed and is currently being tested. This strategy involves "turning up" isochorismate synthase (ICS) through genetic engineering. Isochorismate synthase is the first enzyme in the AQ pathway branching from the shikimic acid pathway. In general, the level of enzyme activity at the first branch point or committed step controls the flux through a biosynthetic pathway. To test whether the level of ICS regulates AQ biosynthesis in plant tissues, we proposed to over-express this synthase in plant cells. A partial cDNA encoding a putative ICS was available from the random

  4. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  5. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    Science.gov (United States)

    Baek, Sunghye

    2017-07-01

    For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo-independent column approximation (McICA) is modified, focusing on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the CPU time of radiation in the Global/Regional Integrated Model system (GRIMs). The CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, the parameterizations of the shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is almost unanimously used in global circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element: aerosol, liquid, ice and snow cloud particles. The parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The result shows that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is mounted on GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.
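
    For reference, the delta scaling mentioned above removes the strong forward-scattering peak from the phase function before the two-stream solve; in the standard delta-Eddington form the truncated fraction is f = g². The sketch below uses the generic textbook scaling relations, not the revised GRIMs parameterizations:

```python
def delta_scale(tau, omega, g):
    """Delta-Eddington scaling of layer optical properties for a
    two-stream solver: remove the forward-peak fraction f = g**2
    from the scattered light. Returns scaled (tau', omega', g')."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# Typical liquid-cloud values: strong forward scattering (g ~ 0.85).
tau_s, omega_s, g_s = delta_scale(tau=10.0, omega=0.99, g=0.85)
print(round(tau_s, 3), round(omega_s, 3), round(g_s, 3))
```

    The scaled layer is optically thinner and less forward-peaked, which is what lets a two-stream solver approximate the strongly peaked phase function at all.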

  6. Multilevel Approximations of Markovian Jump Processes with Applications in Communication Networks

    KAUST Repository

    Vilanova, Pedro

    2015-05-04

    This thesis focuses on the development and analysis of efficient simulation and inference techniques for Markovian pure jump processes with a view towards applications in dense communication networks. These techniques are especially relevant for modeling networks of smart devices (tiny, abundant microprocessors with integrated sensors and wireless communication abilities) that form highly complex and diverse communication networks. During 2010, the number of devices connected to the Internet exceeded the number of people on Earth: over 12.5 billion devices. By 2015, Cisco’s Internet Business Solutions Group predicts that this number will exceed 25 billion. The first part of this work proposes novel numerical methods to estimate, in an efficient and accurate way, observables from realizations of Markovian jump processes. In particular, hybrid Monte Carlo type methods are developed that combine the exact and approximate simulation algorithms to exploit their respective advantages. These methods are tailored to keep a global computational error below a prescribed global error tolerance and within a given statistical confidence level. Indeed, the computational work of these methods is similar to that of an exact method, but with a smaller constant. Finally, the methods are extended to systems with a disparity of time scales. The second part develops novel inference methods to estimate the parameters of Markovian pure jump processes. First, an indirect inference approach is presented, which is based on upscaled representations and does not require sampling. This method is simpler than dealing directly with the likelihood of the process, which, in general, cannot be expressed in closed form and whose maximization requires computationally intensive sampling techniques. Second, a forward-reverse Monte Carlo Expectation-Maximization algorithm is provided to approximate a local maximum or saddle point of the likelihood function of the parameters given a set of

  7. Development of nodal interface conditions for a PN approximation nodal model

    International Nuclear Information System (INIS)

    Feiz, M.

    1993-01-01

    A relation was developed for approximating higher-order odd moments from lower-order odd moments at the nodal interfaces of a Legendre polynomial nodal model. Two sample problems were tested using different-order PN expansions in adjacent nodes. The developed relation proved to be adequate and matched the nodal interface flux accurately. The development allows the use of different-order expansions in adjacent nodes, and will be used in a hybrid diffusion-transport nodal model. (author)

  8. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is investigated here. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...

  9. Turbomolecular Pumps for Holding Gases in Open Containers

    Science.gov (United States)

    Keller, John W.; Lorenz, John E.

    2010-01-01

    Proposed special-purpose turbomolecular pumps, denoted turbotraps, would be designed, along with mating open containers, to prevent the escape of relatively slow-moving (thermal) gas molecules from the containers while allowing atoms moving at much greater speeds to pass through. In the originally intended applications, the containers would be electron-attachment cells, and the contained gases would be vapors of alkali metal atoms moving at thermal speeds below roughly 300 meters per second. These cells would be parts of apparatuses used to measure fluxes of neutral atoms incident at kinetic energies in the approximate range of 10 eV to 10 keV (corresponding to typical speeds of the order of 40,000 m/s and higher). The incident energetic neutral atoms would pass through the cells, wherein charge-exchange reactions with the alkali metal atoms would convert the neutral atoms to negative ions, which, in turn, could then be analyzed by use of conventional charged-particle optics.

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
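The ℓ1-minimization step mentioned in this record can be illustrated with the iterative soft-thresholding algorithm (ISTA), one standard solver for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. This is a generic sketch on a trivial design matrix, not the coherence-optimal sampling scheme itself:

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of t*|.|."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(A, b, lam, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A is a list of rows; the step size uses the safe bound ||A||_2^2 <= ||A||_F^2."""
    m, n = len(A), len(A[0])
    step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T(Ax - b)
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Identity design: the lasso solution is the soft-thresholded data.
x = ista([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
         [2.0, 0.05, -1.0], lam=0.1)
print([round(v, 3) for v in x])  # -> [1.9, 0.0, -0.9]
```

In the compressive-sampling setting, A would contain the orthogonal polynomial basis evaluated at the Monte Carlo samples, with more columns than rows.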

  11. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  12. HCUP National (Nationwide) Inpatient Sample (NIS) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The NIS is the largest publicly available all-payer inpatient care database in the United States. It contains data from approximately 8 million hospital stays each...

  13. Polysaccharide-Based Edible Coatings Containing Cellulase for Improved Preservation of Meat Quality during Storage.

    Science.gov (United States)

    Zimoch-Korzycka, Anna; Jarmoluk, Andrzej

    2017-03-02

    The objectives of this study were to optimize the composition of edible food coatings and to extend the shelf-life of pork meat. Initially, nine meat samples were coated with solutions containing chitosan and hydroxypropyl methylcellulose at various cellulase concentrations: 0%, 0.05%, and 0.1%, stored for 0, 7, and 14 days. Uncoated meat served as the controls. The samples were tested for pH, water activity (aw), total number of microorganisms (TNM), psychrotrophs (P), number of yeast and molds (NYM), colour, and thiobarbituric acid-reactive substances (TBARS). The pH and aw values varied from 5.42 to 5.54 and 0.919 to 0.926, respectively. The reductions in the TNM, P, and NYM after 14 days of storage were approximately 2.71 log cycles, 1.46 log cycles, and 0.78 log cycles, respectively. The enzyme addition improved the stability of the red colour. Significant reduction in TBARS was noted with the inclusion of cellulase in the coating material. Overall, this study provides a promising alternative method for the preservation of pork meat in industry.

  14. Polysaccharide-Based Edible Coatings Containing Cellulase for Improved Preservation of Meat Quality during Storage

    Directory of Open Access Journals (Sweden)

    Anna Zimoch-Korzycka

    2017-03-01

    Full Text Available The objectives of this study were to optimize the composition of edible food coatings and to extend the shelf-life of pork meat. Initially, nine meat samples were coated with solutions containing chitosan and hydroxypropyl methylcellulose at various cellulase concentrations: 0%, 0.05%, and 0.1%, stored for 0, 7, and 14 days. Uncoated meat served as the controls. The samples were tested for pH, water activity (aw), total number of microorganisms (TNM), psychrotrophs (P), number of yeast and molds (NYM), colour, and thiobarbituric acid-reactive substances (TBARS). The pH and aw values varied from 5.42 to 5.54 and 0.919 to 0.926, respectively. The reductions in the TNM, P, and NYM after 14 days of storage were approximately 2.71 log cycles, 1.46 log cycles, and 0.78 log cycles, respectively. The enzyme addition improved the stability of the red colour. Significant reduction in TBARS was noted with the inclusion of cellulase in the coating material. Overall, this study provides a promising alternative method for the preservation of pork meat in industry.

  15. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  16. Contamination profiles of short-chain polychlorinated n-alkanes in foodstuff samples from Japan

    Energy Technology Data Exchange (ETDEWEB)

    Matsukami, Hidenori; Kurunthachalam, S; Ohi, Etsumasa; Takasuga, Takumi [Shimadzu Techno Research, Inc., Kyoto (Japan); Iino, Fukuya; Nakanishi, Junko [National Inst. of Advanced Industrial Science and Technology, Tsukuba (Japan)

    2004-09-15

    Polychlorinated n-alkanes (PCAs) are a group of chemicals manufactured by chlorination of liquid n-paraffin or paraffin wax that contain 30 to 70% chlorine by weight. Large amounts of PCAs are widely used as plasticizers for vinyl chloride, lubricants, paints, flame retardants, and in a number of other industrial applications. Annual global production of PCAs is approximately 300 kilotonnes, with the majority having medium carbon chain (C14-C19) lengths. According to an investigation made by Kagaku Kogyo Nippon-Sha, the annual consumption of PCAs in Japan was about 83,000 tons between 1986 and 2001. Short-carbon-chain PCAs (C10-C13) have been placed on the Priority Substance List under the Canadian Environmental Protection Act and on the Environmental Protection Agency Toxic Release Inventory in the USA due to their potential to act as tumor promoters in mammals. Data on environmental levels of PCAs are meager; nevertheless, PCAs have been measured at relatively high concentrations in biota from Sweden, in biota and sediment from Canada, and in marine biota and human milk from the Canadian Arctic. In our earlier study, we reported concentrations of short-chain PCAs from a sewage treatment plant (STP) on the Tama River, Tokyo, and in river water and sediment from Tokyo and Osaka. STP influent water contained greater short-chain PCA concentrations than STP effluent. In addition, some river water and sediment samples contained detectable concentrations of short-chain PCAs, similar to other industrial countries. However, no study has been conducted to explore the contamination profiles of short-chain PCAs in human foodstuff samples. In the present study, we analyzed eleven foodstuff samples purchased from various supermarkets in order to determine short-chain PCA concentrations in foodstuffs and possible human total daily intake (TDI) amounts.

  17. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    Science.gov (United States)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model.
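The core idea of Direct Sampling, generating each new value by scanning the training data for a similar neighborhood pattern and copying what follows it, can be sketched as follows (a toy one-dimensional version with hypothetical parameters, not the authors' implementation):

```python
import random

def direct_sampling_sim(training, n_sim, neighborhood=3, threshold=0.1, seed=1):
    """Simulate a series by resampling the training series wherever a similar
    pattern of the last `neighborhood` values occurs (hypothetical parameters)."""
    rng = random.Random(seed)
    sim = list(training[:neighborhood])   # seed with the first training values
    while len(sim) < n_sim:
        pattern = sim[-neighborhood:]
        idxs = list(range(neighborhood, len(training)))
        rng.shuffle(idxs)                 # scan candidate positions in random order
        best_i, best_d = idxs[0], float("inf")
        for i in idxs:
            cand = training[i - neighborhood:i]
            d = sum(abs(a - c) for a, c in zip(pattern, cand)) / neighborhood
            if d < best_d:
                best_i, best_d = i, d
            if d <= threshold:            # accept the first good-enough match
                break
        sim.append(training[best_i])      # copy the value following the match
    return sim

train = [float((i * 7) % 10) for i in range(50)]
print(direct_sampling_sim(train, n_sim=12))
```

Because every simulated value is copied from the training set, marginal statistics and short-range patterns of the training data are reproduced without fitting an explicit probability model.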

  18. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling, a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
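As a minimal illustration of unstructured low-rank approximation (far simpler than the structured solvers the book treats), the best rank-1 approximation of a matrix can be computed by power iteration on A^T A:

```python
def rank1_approx(A, iters=50):
    """Best rank-1 approximation A ~ sigma * u v^T via power iteration on A^T A.
    A is a dense list-of-rows matrix; assumes the leading singular value is nonzero."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # A v
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]   # A^T A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                                       # normalize
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    sigma = sum(x * x for x in u) ** 0.5
    u = [x / sigma for x in u]
    return sigma, u, v

sigma, u, v = rank1_approx([[3.0, 4.0], [6.0, 8.0]])  # an exactly rank-1 matrix
print(round(sigma, 4))  # -> 11.1803 (= 5 * sqrt(5))
```

By the Eckart-Young theorem this truncated-SVD construction is optimal in the Frobenius and spectral norms; structured variants (Toeplitz, Hankel) additionally constrain the approximant, which is what requires the local optimization methods described above.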

  19. Authenticated Secure Container System (ASCS)

    International Nuclear Information System (INIS)

    1991-01-01

    Sandia National Laboratories developed an Authenticated Secure Container System (ASCS) for the International Atomic Energy Agency (IAEA). Agency standard weights and safeguards samples can be stored in the ASCS to provide continuity of knowledge. The ASCS consists of an optically clear cover, a base containing the Authenticated Item Monitoring System (AIMS) transmitter, and the AIMS receiver unit for data collection. The ASCS will provide the Inspector with information concerning the status of the system, during a surveillance period, such as state of health, tampering attempts, and movement of the container system. The secure container is located inside a Glove Box with the receiver located remotely from the Glove Box. AIMS technology uses rf transmission from the secure container to the receiver to provide a record of state of health and tampering. The data is stored in the receiver for analysis by the Inspector during a future inspection visit. 2 refs

  20. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  1. Longitudinal study of iodine in toenails following IV administration of an iodine-containing contrast agent

    International Nuclear Information System (INIS)

    Spate, V.L.; Morris, J.S.; Nichols, T.A.; Baskett, C.K.; Mason, M.M.; Horsman, T.L.; McDougall, I.R.

    1998-01-01

    The literature on the relationship between diet and thyroid cancer (TC) risk and the higher incidence of TC among Asian immigrants to the US compared to second- and third-generation subgroups has prompted epidemiologists to hypothesize that increased levels of iodine consumption may be associated with TC risk, particularly among persons with a history of clinical or subclinical thyroid dysfunction. At the University of Missouri Research Reactor (MURR), we have applied epiboron neutron activation analysis to investigate human nails as a dietary monitor for iodine. Preliminary studies have indicated a positive correlation between dietary iodine intake and the concentration of iodine in toenails. However, these studies are confounded by high iodine levels (up to 30 ppm) in approximately 5% of the nails studied. We hypothesize that, in the subjects we have studied, the high iodine levels may be due to iodine-containing medications, in particular contrast agents containing iopamidol. This paper reports on longitudinal studies of contrast-agent subjects who were followed up for almost two years, compared to a longitudinal control and a population mean. Based on this study, we suggest that iodine-containing contrast agents contaminate nail samples via non-specific binding in the short term, followed by incorporation in the nail as a result of absorption. (author)

  2. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph

    2016-12-08

    We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.

  3. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
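The second phase, estimating the probability of a k0-bounded until property by sampling, can be sketched as plain Monte Carlo simulation over a discrete-time Markov chain (a generic illustration; the chain, bound, and probabilities below are made up, and the property is simplified to plain reachability within the bound):

```python
import random

def estimate_bounded_until(P, init, goal, bound, n_samples=20000, seed=42):
    """Monte Carlo estimate of Pr[ reaching `goal` within `bound` steps ]
    on a discrete-time Markov chain P: state -> list of (next_state, prob)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = init
        for _ in range(bound + 1):     # check the state at times 0..bound
            if s in goal:
                hits += 1
                break
            r, acc = rng.random(), 0.0
            for nxt, p in P[s]:        # sample the successor state
                acc += p
                if r < acc:
                    s = nxt
                    break
    return hits / n_samples

# Made-up chain: from state 0, stay with prob 0.5 or move to goal state 1.
P = {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}
print(estimate_bounded_until(P, init=0, goal={1}, bound=3))  # ~ 0.875 = 1 - 0.5**3
```

The exact value 1 - 0.5^3 for this toy chain makes the estimator easy to sanity-check; a statistical model checker would additionally wrap the estimate in a confidence-interval test.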

  4. Design aspects of automation system for initial processing of fecal samples

    International Nuclear Information System (INIS)

    Sawant, Pramilla D.; Prabhu, Supreetha P.; Suja, A.; Wankhede, Sonal; Chaudhary, Seema; Rao, D.D.; Pradeepkumar, K.S.; Das, A.P.; Badodkar, B.D.

    2014-01-01

    The procedure for initial handling of fecal samples at the Bioassay Lab., Trombay is as follows: overnight fecal samples are collected from the worker in a kit consisting of a polythene bag placed in a wide-mouth polythene container closed with an inner lid and a screw cap. The occupational worker collects the sample in the polythene bag. On receiving the sample, the polythene container along with the sample is weighed; the polythene bag containing the fecal sample is then lifted out of the container using a pair of tongs, placed inside a crucible, and ashed inside a muffle furnace at 450℃. After complete ashing, the crucible containing white ash is taken up for further radiochemical processing. This paper describes the various steps in developing a prototype automated system for initial handling of fecal samples, which is proposed to automate the above procedure. The system, once developed, will help eliminate manual intervention up to the ashing stage and reduce the biological hazard involved in handling such samples.

  5. Mark I containment, short term program. Safety evaluation report

    International Nuclear Information System (INIS)

    1977-12-01

    Presented is a Safety Evaluation Report (SER) prepared by the Office of Nuclear Reactor Regulation addressing the Short Term Program (STP) reassessment of the containment systems of operating Boiling Water Reactor (BWR) facilities with the Mark I containment system design. The information presented in this SER establishes the basis for the NRC staff's conclusion that licensed Mark I BWR facilities can continue to operate safely, without undue risk to the health and safety of the public, during an interim period of approximately two years while a methodical, comprehensive Long Term Program (LTP) is conducted. This SER also provides one of the basic foundations for the NRC staff review of the Mark I containment systems for facilities not yet licensed for operation.

  6. Non destructive Testing (NDT) of concrete containing hematite

    International Nuclear Information System (INIS)

    Mohamad Pauzi Ismail; Noor Azreen Masenwat; Suhairy Sani; Nasharuddin Isa; Mohamad Haniza Mahmud

    2014-01-01

    This paper describes the results of non-destructive ultrasonic and rebound hammer measurements on concrete containing hematite. Local hematite stones were used as aggregates to produce high-density concrete for application in X- and gamma-ray shielding. Concrete cube samples (150 mm x 150 mm x 150 mm) containing hematite as coarse aggregate were prepared by changing the mix ratio, water-to-cement ratio (w/c), and type of fine aggregate. All samples were cured in water for 7 days and then tested after 28 days. The density, rebound number (N), and ultrasonic pulse velocity (UPV) of the samples were taken before the samples were compressed to failure. The measurement results are explained and discussed. (author)

  7. Anaphylaxis to gelatin-containing rectal suppositories.

    Science.gov (United States)

    Sakaguchi, M; Inouye, S

    2001-12-01

    Some children, though the number is few, have been sensitized with gelatin. To investigate the relationship between the presence of antigelatin IgE and anaphylaxis to gelatin-containing rectal suppositories, we measured antigelatin IgE in the sera of children with anaphylaxis. Ten children showed systemic allergic reactions, including anaphylaxis, to a chloral hydrate rectal suppository containing gelatin (231 mg/dose) that had been used as a sedative. These children's clinical histories and serum samples were submitted by physicians to the National Institute of Infectious Diseases during a 2-year period from 1996 to 1997. Of the 10 children, 5 showed apparent anaphylaxis, including hypotension and/or cyanosis, along with urticaria or wheezing; 2 showed both urticaria and wheezing without hypotension or cyanosis; the other 3 showed only urticaria. All of the children had antigelatin IgE (mean value +/- SD, 7.9 +/- 8.4 Ua/mL). As a control, samples from 250 randomly selected children had no antigelatin IgE. These findings suggest that the 10 children's systemic allergic reactions to this suppository were caused by the gelatin component. Gelatin-containing suppositories must be used with the same caution as gelatin-containing vaccines and other medications.

  8. The AC Stark Effect, Time-Dependent Born-Oppenheimer Approximation, and Franck-Condon Factors

    CERN Document Server

    Hagedorn, G A; Jilcott, S W

    2005-01-01

    We study the quantum mechanics of a simple molecular system that is subject to a laser pulse. We model the laser pulse by a classical oscillatory electric field, and we employ the Born-Oppenheimer approximation for the molecule. We compute transition amplitudes to leading order in the laser strength. These amplitudes contain Franck-Condon factors that we compute explicitly to leading order in the Born-Oppenheimer parameter. We also correct an erroneous calculation in the mathematical literature on the AC Stark effect for molecular systems.

  9. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
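A simpler single-group version of this inference (not the two-group model of the paper) illustrates the computation: with a Beta prior on the per-item defect probability and n sampled items all acceptable, the posterior probability that all m unsampled items are also acceptable has a closed form.

```python
import math

def log_beta(x, y):
    """log of the Beta function B(x, y), via log-gamma for numerical stability."""
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def prob_all_clean(n_sampled, m_unsampled, a=1.0, b=1.0):
    """P(all m unsampled items acceptable | n sampled items all acceptable)
    under a Beta(a, b) prior on the per-item defect probability p.
    The posterior is Beta(a, b + n), so the answer is
    E[(1 - p)^m] = B(a, b + n + m) / B(a, b + n)."""
    return math.exp(log_beta(a, b + n_sampled + m_unsampled)
                    - log_beta(a, b + n_sampled))

# Uniform prior: reduces to (1 + n) / (1 + n + m), Laplace-rule style.
print(round(prob_all_clean(10, 5), 4))  # -> 0.6875 (= 11/16)
```

The two-group model described above would carry separate priors for the judgmentally and randomly sampled strata and combine them in the posterior.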

  10. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximations, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which results in large errors for the practical approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show the approximations work well and the error is below 10%.

  11. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  12. Apparatus for sampling hazardous media

    International Nuclear Information System (INIS)

    Gardner, J.F.; Showalter, T.W.

    1984-01-01

    An apparatus for sampling a hazardous medium, such as radioactive or chemical waste, selectively collects a predetermined quantity of the medium in a recess of an end-over-end rotatable valving member. This collected quantity is deposited in a receiving receptacle located in a cavity while the receiving receptacle is in a sealed relationship with a recess to prevent dusting of the sampled media outside the receiving receptacle. The receiving receptacle is removably fitted within a vehicle body which is, in turn, slidably movable upon a track within a transport tube. The receiving receptacle is transported in the vehicle body from its sample receiving position within a container for the hazardous medium to a sample retrieval position outside the medium container. The receiving receptacle may then be removed from the vehicle body, capped and taken to a laboratory for chemical analysis. (author)

  13. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
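For contrast with SPARROW, the k-NNR baseline it generalizes is simple to state (a generic sketch, not the authors' code): average the regressands of the k regressors closest to the query point.

```python
def knn_regress(X, y, x_query, k=3):
    """Plain k-nearest-neighbors regression: average the regressands y of the
    k regressors in X closest (squared Euclidean distance) to the query point."""
    order = sorted(range(len(X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x_query)))
    return sum(y[i] for i in order[:k]) / k

X = [[0.0], [1.0], [2.0], [10.0]]
y = [0.0, 1.0, 2.0, 10.0]
print(knn_regress(X, y, [1.1], k=3))  # -> 1.0 (mean of y at x = 0, 1, 2)
```

SPARROW replaces the fixed-k neighbor selection with a sparse approximation of the query in the dictionary of regressors, so the number of contributing points adapts to the data.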

  14. The local quantum-mechanical stress tensor in Thomas-Fermi approximation and gradient expansion method

    International Nuclear Information System (INIS)

    Kaschner, R.; Graefenstein, J.; Ziesche, P.

    1988-12-01

    From the local momentum balance using density functional theory, an expression for the local quantum-mechanical stress tensor (or stress field) σ(r) of non-relativistic Coulomb systems is derived within the Thomas-Fermi approximation and its generalizations, including the gradient expansion method. As an illustration, the stress field σ(r) is calculated for the jellium model of the interface K-Cs, including in particular the adhesive force between the two half-space jellia. (author). 23 refs, 1 fig

  15. Filtering of sound from the Navier-Stokes equations. [An approximation for describing thermal convection in a compressible fluid

    Energy Technology Data Exchange (ETDEWEB)

    Paolucci, S.

    1982-12-01

    An approximation leading to anelastic equations capable of describing thermal convection in a compressible fluid is given. These equations are more general than the Oberbeck-Boussinesq equations and differ from the standard anelastic equations in that they can be used for the computation of convection in a fluid with large density gradients present. We show that the equations do not contain acoustic waves, while at the same time they can still describe the propagation of internal waves. Throughout, we show that the filtering of acoustic waves, within the limits of the approximation, does not appreciably alter the description of the physics.

  16. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A–G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as “category A” bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. The first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype-specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  17. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples.

    Science.gov (United States)

    Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B

    2015-11-26

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. The first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype-specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  18. Users' guide to CACECO containment analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Peak, R.D.

    1979-06-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. The code is included in the National Energy Software Center Library at Argonne National Laboratory as Program No. 762. This users' guide describes the CACECO code and its data input requirements. The code description covers the many mathematical models used and the approximations used in their solution. The descriptions are detailed to the extent that the user can modify the code to suit his unique needs, and, indeed, the reader is urged to consider code modification acceptable.

  19. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.
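
    The truncated-Legendre-series idea is easy to try numerically. The sketch below (plain NumPy; it illustrates only the least-squares fit of a truncated Legendre series, not the tau discretization of the delay equation itself) checks that the error falls rapidly with the truncation degree:

    ```python
    import numpy as np

    def legendre_fit(f, degree, n_pts=200):
        """Least-squares coefficients of a truncated Legendre series on [-1, 1]."""
        x = np.linspace(-1.0, 1.0, n_pts)
        return np.polynomial.legendre.legfit(x, f(x), degree)

    def legendre_eval(coeffs, x):
        """Evaluate the truncated Legendre series at the points x."""
        return np.polynomial.legendre.legval(x, coeffs)

    # Approximate a smooth function and record the max error per truncation degree.
    x = np.linspace(-1.0, 1.0, 57)
    errs = {d: np.max(np.abs(legendre_eval(legendre_fit(np.cos, d), x) - np.cos(x)))
            for d in (2, 4, 8)}
    # Spectral accuracy: each added pair of modes shrinks the error by orders of magnitude.
    ```

    The rapid decay of `errs` with degree is the "very accurate approximations" property the abstract refers to, here for a smooth test function.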

  20. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  1. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented

  2. Development of high temperature resistant geomembranes for oil sands secondary containments

    Energy Technology Data Exchange (ETDEWEB)

    Mills, A. [Layfield Environmental Systems Ltd., Edmonton, AB (Canada); Martin, D. [Layfield Geosynthetics and Industrial Fabrics Ltd., Edmonton, AB (Canada)

    2008-07-01

    Plastic liner materials are often adversely impacted by chemicals at elevated temperatures. Heat accelerates the oxidation of the polymeric chains, which in turn accelerates the degradation of the plastic. This paper discussed geomembrane containment systems placed under heated petroleum storage tanks at an oil sands processing plant. Various high temperature-resistant geomembrane materials were tested. Compatibility testing procedures for the various fluids contained by the systems were outlined. Installation procedures for the membranes were also discussed. The membrane systems were designed for use with heavy gas oil, light gas oil, and naphtha. Temperatures in the ground below the tanks were approximately 79 degrees C. Testing was done using sealed containers held in an oil bath at temperatures of 105 degrees C. Heat applied to the chemicals during the tests pressurized the test vessels. Liner materials used in the initial tests included an ester-based thermoplastic polyurethane liner; high density polyethylene (HDPE); linear low-density polyethylene (LLDPE); polypropylene (PP) olefins; polyvinyl chloride (PVC); and polyvinylidene fluoride (PVDF) materials. A second set of tests was then conducted using alloy materials and PVC. Heat stability tests demonstrated that the blue 0.75 mm alloy showed a tensile strength ratio within the industry's 15 per cent pass criterion. The samples were then tested with diluted bitumen and diluents at 65, 85 and 100 degrees C. The developed liners were installed underneath petroleum tanks with leak detection chambers. It was concluded that the geomembrane liners prevented the hot liquids from leaking. 4 refs., 8 tabs.

  3. Containers and systems for the measurement of radioactive gases and related methods

    Science.gov (United States)

    Mann, Nicholas R; Watrous, Matthew G; Oertel, Christopher P; McGrath, Christopher A

    2017-06-20

    Containers for a fluid sample containing a radionuclide for measurement of radiation from the radionuclide include an outer shell having one or more ports between an interior and an exterior of the outer shell, and an inner shell secured to the outer shell. The inner shell includes a detector receptacle sized for at least partial insertion into the outer shell. The inner shell and outer shell together at least partially define a fluid sample space. The outer shell and inner shell are configured for maintaining an operating pressure within the fluid sample space of at least about 1000 psi. Systems for measuring radioactivity in a fluid include such a container and a radiation detector received at least partially within the detector receptacle. Methods of measuring radioactivity in a fluid sample include maintaining a pressure of a fluid sample within a Marinelli-type container at least at about 1000 psi.

  4. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
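
    As a toy illustration of the surrogate idea (plain NumPy, with an invented one-parameter "forward operator" standing in for an expensive synthetic-seismogram calculation), a Gaussian Process trained on a handful of exact evaluations can stand in for the solver when scoring candidate models:

    ```python
    import numpy as np

    def rbf_kernel(a, b, length=0.5):
        """Squared-exponential covariance between two sets of 1-D points."""
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    def gp_predict(x_train, y_train, x_new, jitter=1e-8):
        """GP posterior mean at x_new, given exact forward-operator evaluations."""
        K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
        return rbf_kernel(x_new, x_train) @ np.linalg.solve(K, y_train)

    # Hypothetical expensive forward operator (a stand-in, not a real solver).
    forward = lambda m: np.sin(3 * m)

    x_run = np.linspace(0.0, 1.0, 10)    # models where the exact solver was run
    y_run = forward(x_run)
    x_cand = np.linspace(0.0, 1.0, 101)  # candidate models proposed by the sampler
    approx = gp_predict(x_run, y_run, x_cand)  # cheap surrogate evaluations
    err = np.max(np.abs(approx - forward(x_cand)))
    ```

    Refinement during inversion corresponds to appending newly solved models to `x_run`/`y_run` and re-fitting, so the surrogate sharpens exactly where the sampler spends its time.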

  5. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. An estimate relating the entropy and approximation numbers of noncompact maps is also given.

  6. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  7. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
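
    To make the streaming idea concrete, here is a minimal Count-Min sketch (a simpler cousin of the CountSketch mentioned above; an illustrative stand-in, not the authors' algorithm) that approximates degree centrality from an edge stream using memory independent of the number of edges:

    ```python
    import random

    class CountMin:
        """Count-Min sketch: approximate stream counts in small memory.

        Estimates never undercount; collisions can only inflate them.
        """
        def __init__(self, width=256, depth=4, seed=0):
            rng = random.Random(seed)
            self.width = width
            self.salts = [rng.getrandbits(32) for _ in range(depth)]
            self.table = [[0] * width for _ in range(depth)]

        def add(self, key):
            for row, salt in enumerate(self.salts):
                self.table[row][hash((salt, key)) % self.width] += 1

        def estimate(self, key):
            return min(self.table[row][hash((salt, key)) % self.width]
                       for row, salt in enumerate(self.salts))

    # Degree centrality from an edge stream, without storing the graph:
    # increment each endpoint's counter as its edges arrive.
    sketch = CountMin()
    edges = [(0, i) for i in range(1, 50)] + [(1, 2), (2, 3)]
    for u, v in edges:
        sketch.add(u)
        sketch.add(v)
    # Node 0 has true degree 49; the sketch never underestimates it.
    ```

    The memory is `width * depth` counters regardless of graph size, which is the sublinearity the paper is after; CountSketch replaces the min with a signed median to remove the overestimation bias.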

  8. Improved approximate inspirals of test bodies into Kerr black holes

    International Nuclear Information System (INIS)

    Gair, Jonathan R; Glampedakis, Kostas

    2006-01-01

    We present an improved version of the approximate scheme for generating inspirals of test bodies into a Kerr black hole recently developed by Glampedakis, Hughes and Kennefick. Their original 'hybrid' scheme was based on combining exact relativistic expressions for the evolution of the orbital elements (the semilatus rectum p and eccentricity e) with an approximate, weak-field, formula for the energy and angular momentum fluxes, amended by the assumption of constant inclination angle ι during the inspiral. Despite the fact that the resulting inspirals were overall well behaved, certain pathologies remained for orbits in the strong-field regime and for orbits which are nearly circular and/or nearly polar. In this paper we eliminate these problems by incorporating an array of improvements in the approximate fluxes. First, we add certain corrections which ensure the correct behavior of the fluxes in the limit of vanishing eccentricity and/or 90 deg. inclination. Second, we use higher order post-Newtonian formulas, adapted for generic orbits. Third, we drop the assumption of constant inclination. Instead, we first evolve the Carter constant by means of an approximate post-Newtonian expression and subsequently extract the evolution of ι. Finally, we improve the evolution of circular orbits by using fits to the angular momentum and inclination evolution determined by Teukolsky-based calculations. As an application of our improved scheme, we provide a sample of generic Kerr inspirals which we expect to be the most accurate to date, and for the specific case of nearly circular orbits we locate the critical radius where orbits begin to decircularize under radiation reaction. These easy-to-generate inspirals should become a useful tool for exploring LISA data analysis issues and may ultimately play a role in the detection of inspiral signals in the LISA data

  9. The approximation function of bridge deck vibration derived from the measured eigenmodes

    Directory of Open Access Journals (Sweden)

    Sokol Milan

    2017-12-01

    This article deals with a method of how to acquire approximate displacement vibration functions. Input values are discrete, experimentally obtained mode shapes. A new improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. An alternative approach employed in this paper is to approximate the displacement vibration function by a sum of sine functions whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data and where the parameters of scale and phase are estimated as usual by the least-squares method. Moreover, this periodic component is supplemented by a cubic regression spline (fitted on its residuals) that captures individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using more vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been extended to the whole length of the bridge. The experimental data describing the mode shapes are not appropriate for direct use; in particular, the higher derivatives calculated from these data are very sensitive to data precision.
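
    The sine-sum step is linear once the periodicities are fixed by the spectral analysis: fitting a scaled, phase-shifted sine at a known frequency is equivalent to fitting one sine and one cosine term. A minimal sketch with assumed frequencies and synthetic non-uniform sample points:

    ```python
    import numpy as np

    def fit_sines(x, y, freqs):
        """Least-squares sin/cos amplitudes at given angular frequencies.

        a*sin(wx) + b*cos(wx) is the linear-in-parameters form of fitting
        scale and phase, and works for non-uniformly sampled x.
        """
        cols = [np.sin(w * x) for w in freqs] + [np.cos(w * x) for w in freqs]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A, coef

    # Synthetic two-frequency signal at non-uniform sample points.
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0.0, 10.0, 80))
    y = 2.0 * np.sin(1.3 * x) + 0.5 * np.cos(2.7 * x)
    A, coef = fit_sines(x, y, freqs=[1.3, 2.7])
    residual = np.max(np.abs(A @ coef - y))
    ```

    In the paper's setting, the residuals of this periodic fit would then be smoothed with the cubic regression spline to capture the pier-to-pier displacements.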

  10. 7 CFR 51.17 - Official sampling.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Official sampling. 51.17 Section 51.17 Agriculture... Inspection Service § 51.17 Official sampling. Samples may be officially drawn by any duly authorized... time and place of the sampling and the brands or other identifying marks of the containers from which...

  11. Maintaining continuity of knowledge on safeguards samples

    International Nuclear Information System (INIS)

    Franssen, F.; Islam, A.B.M.N.; Sonnier, C.; Schoeneman, J.L.; Baumann, M.

    1992-01-01

    The conclusions of the vulnerability test on VOPAN (Verification of Operator's Analysis), as conducted at the Safeguards Analytical Laboratory (SAL) at Seibersdorf, Austria in October 1990 and documented in STR-266, indicate that ''whenever samples are taken for safeguards purposes extreme care must be taken to ensure that they have not been interfered with during the sample taking, transportation, storage or sample preparation process.'' Indeed, there exist a number of possibilities to alter the content of a safeguards sample vial from the moment of sampling up to the arrival of the treated (or untreated) sample at SAL. The time lapse between these two events can range from a few days up to months. The sample history over this period can be subdivided into three main sub-periods: (1) from the start of the sampling activities up to the treatment in the operator's laboratory, (2) during treatment of the samples in the operator's laboratory, and (3) from that treatment to the arrival of the sample at SAL. A combined effort between the Agency and the United States Support Program to the Agency (POTAS) has resulted in two active tasks and one proposed task to investigate improving the maintenance of continuity of knowledge on safeguards samples during the entire period of their existence. This paper describes the use of the Sample Vial Secure Container (SVSC), the Authenticated Secure Container System (ASCS), and the Secure Container for Storage and Transportation of samples (SCST) to guarantee that a representative portion of the solution sample will be received at SAL.

  12. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals (the subtraction of the spline from the original time series) are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
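
    The non-monotone behaviour is easy to reproduce with an equidistant-knot least-squares cubic spline (a SciPy sketch on synthetic data; the knot counts and test signal are invented for illustration): with few knots the spline recovers the smooth background, while many knots make it track the superimposed fluctuation instead.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Smooth background plus a superimposed fluctuation, as in the filtering setting.
    x = np.linspace(0.0, 10.0, 400)
    background = 0.1 * x ** 2
    signal = background + 0.3 * np.sin(8 * x)

    def spline_background(n_interior):
        """Least-squares cubic spline with equidistant interior knots."""
        knots = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
        return LSQUnivariateSpline(x, signal, knots, k=3)(x)

    # Few knots: the fluctuation averages out and the background is recovered.
    err_few = np.max(np.abs(spline_background(3) - background))
    # Many knots: the spline follows the fluctuation, degrading the background estimate.
    err_many = np.max(np.abs(spline_background(60) - background))
    ```

    Shrinking the knot spacing below the fluctuation wavelength is exactly the regime where denser spline sampling points stop helping.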

  13. A Case Study on Air Combat Decision Using Approximated Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Yaofei Ma

    2014-01-01

    As a continuous state space problem, air combat is difficult to solve with traditional dynamic programming (DP) over a discretized state space. The approximated dynamic programming (ADP) approach is studied in this paper to build a high-performance decision model for air combat in a one-versus-one scenario, in which the iterative process for policy improvement is replaced by mass sampling from history trajectories and utility function approximation, eventually leading to highly efficient policy improvement. A continuous reward function is also constructed to better guide the plane to find its way to the “winner” state from any initial situation. According to our experiments, the plane is more offensive when following the policy derived from the ADP approach rather than the baseline Min-Max policy: the “time to win” is greatly reduced, but the cumulative probability of being killed by the enemy is higher. The reason is analyzed in this paper.
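
    A minimal sketch of the two ingredients named above, sampling history trajectories and approximating the utility function, on an invented one-dimensional toy problem (quadratic features; this is generic fitted value estimation, not the paper's air-combat model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    gamma = 0.9  # discount factor

    def sample_trajectory(s0, steps=40):
        """Roll out a fixed policy; reward is higher nearer the goal state 0."""
        states, rewards = [], []
        s = s0
        for _ in range(steps):
            states.append(s)
            rewards.append(-abs(s))           # continuous reward guiding toward 0
            s = 0.8 * s + rng.normal(0, 0.1)  # assumed closed-loop dynamics
        return states, rewards

    # Monte Carlo returns from a mass of sampled history trajectories.
    X, G = [], []
    for _ in range(200):
        states, rewards = sample_trajectory(rng.uniform(-2.0, 2.0))
        ret = 0.0
        for s, r in zip(reversed(states), reversed(rewards)):
            ret = r + gamma * ret             # discounted return from state s onward
            X.append([1.0, s, s * s])         # quadratic features for the utility
            G.append(ret)
    w, *_ = np.linalg.lstsq(np.array(X), np.array(G), rcond=None)

    def value(s):
        """Approximated utility of a state under the sampled policy."""
        return w[0] + w[1] * s + w[2] * s * s
    ```

    A policy-improvement step would then pick, in each state, the action whose successor state maximizes `value`, and the cycle repeats on freshly sampled trajectories.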

  14. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    This paper is devoted to the study of axiomatic characterizations of IVF (interval-valued fuzzy) rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are thereby characterized by axioms.

  15. Stability of odorants from pig production in sampling bags for olfactometry.

    Science.gov (United States)

    Hansen, Michael J; Adamsen, Anders P S; Feilberg, Anders; Jonassen, Kristoffer E N

    2011-01-01

    Odor from pig production facilities is typically measured with olfactometry, whereby odor samples are collected in sampling bags and assessed by human panelists within 30 h. In the present study, the storage stability of odorants in two types of sampling bags often used for olfactometry was investigated. The bags were made of Tedlar or Nalophan. In a field experiment, humid and dried air samples were collected from a pig production facility with growing-finishing pigs and analyzed with a gas chromatograph with an amperometric sulfur detector at 4, 8, 12, 28, 52, and 76 h after sampling. In a laboratory experiment, the bags were filled with a humid gas mixture containing carboxylic acids, phenols, indoles, and sulfur compounds and analyzed with proton-transfer-reaction mass spectrometry after 0, 4, 8, 12, and 24 h. The results demonstrated that the concentrations of carboxylic acids, phenols, and indoles decreased by 50 to >99% during the 24 h of storage in Tedlar and Nalophan bags. The concentration of hydrogen sulfide decreased by approximately 30% during the 24 h of storage in Nalophan bags, whereas in Tedlar bags the concentrations of sulfur compounds decreased to a much smaller extent, so the composition changes toward a higher relative presence of sulfur compounds. This can result in underestimation of odor emissions from pig production facilities and of the effect of odor reduction technologies. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  16. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    In this paper, we propose a definition of approximation property which is called the metric invariant translation approximation property for a countable discrete metric space. Moreover, we use ...

  17. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  18. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  19. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCSs is to compute height ridges in the finite-time Lyapunov exponent (FTLE) field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCSs on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamics examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
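
    The standard finite-time Lyapunov exponent (FTLE) pipeline that this work contrasts against can be sketched in a few lines: integrate a flow map, take its finite-difference gradient, and form the Cauchy-Green strain tensor. An invented linear saddle flow is used here so the exact answer (FTLE = 1 everywhere) is known:

    ```python
    import numpy as np

    def flow_map(p, T=1.0, steps=200):
        """Advect a particle through a steady saddle flow u = x, v = -y (explicit Euler)."""
        x, y = p
        dt = T / steps
        for _ in range(steps):
            x, y = x + dt * x, y - dt * y
        return np.array([x, y])

    def ftle(p, T=1.0, h=1e-4):
        """FTLE from a central-difference gradient of the flow map."""
        J = np.empty((2, 2))
        for j, e in enumerate(np.eye(2)):
            J[:, j] = (flow_map(p + h * e, T) - flow_map(p - h * e, T)) / (2 * h)
        C = J.T @ J                        # Cauchy-Green strain tensor
        lam_max = np.linalg.eigvalsh(C)[-1]
        return np.log(lam_max) / (2 * T)

    # For the linear saddle the stretching rate, hence the FTLE, is exactly 1.
    val = ftle(np.array([0.3, 0.4]))
    ```

    The paper's point is that the gradient step above needs finely sampled particle grids; their geometric flow-map reconstruction avoids that extra sampling.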

  20. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz' result to the case in which the approximating function is \(\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)\). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
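
    Setting α = β = 1 and N = 0 in Mohapatra's operator recovers the classical Szasz operator, which is easy to evaluate numerically; the incremental Poisson-weight update below avoids overflowing (ux)^k and k!. The approximation error shrinks roughly like 1/u:

    ```python
    import math

    def szasz(f, u, x, terms=2000):
        """Classical Szasz operator: e^{-ux} * sum_k (ux)^k / k! * f(k/u).

        This is the alpha = beta = 1, N = 0 case of the generalized operator.
        """
        p = math.exp(-u * x)   # Poisson(ux) weight for k = 0, updated incrementally
        total = p * f(0.0)
        for k in range(1, terms):
            p *= u * x / k     # weight for k from weight for k-1
            total += p * f(k / u)
        return total

    # Convergence on [0, infinity): the error at a fixed point shrinks as u grows.
    f = lambda t: 1.0 / (1.0 + t)
    err_small_u = abs(szasz(f, 10, 2.0) - f(2.0))
    err_large_u = abs(szasz(f, 200, 2.0) - f(2.0))
    ```

    This mirrors the Bernstein/binomial picture quoted above, with the Poisson distribution supplying the weights on the半 half-line.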

  1. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations which include diffusion of charges agree with experimental results for current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  2. Bayesian posterior sampling via stochastic gradient Fisher scoring

    NARCIS (Netherlands)

    Ahn, S.; Korattikara, A.; Welling, M.; Langford, J.; Pineau, J.

    2012-01-01

    In this paper we address the following question: "Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?". An algorithm based on the Langevin equation with stochastic gradients (SGLD) was

  3. Surface Brightness Profiles of Composite Images of Compact Galaxies at z ≈ 4-6 in the Hubble Ultra Deep Field

    National Research Council Canada - National Science Library

    Hathi, N. P; Jansen, R. A; Windhorst, R. A; Cohen, S. H; Keel, W. C; Corbin, M. R; Ryan, Jr, R. E

    2007-01-01

    The Hubble Ultra Deep Field (HUDF) contains a significant number of B-, V-, and i′-band dropout objects, many of which were recently confirmed to be young star-forming galaxies at z ≈ 4-6...

  4. Permeability of gypsum samples dehydrated in air

    Science.gov (United States)

    Milsch, Harald; Priegnitz, Mike; Blöcher, Guido

    2011-09-01

    We report on changes in rock permeability induced by devolatilization reactions using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air (dry) for up to 800 h at ambient pressure and temperatures between 378 and 423 K. Subsequently, the reaction kinetics, the induced changes in porosity, and the concurrent evolution of sample permeability were constrained. Weighing the heated samples at predefined time intervals yielded the reaction progress, where the stoichiometric mass balance indicated an ultimate and complete dehydration to anhydrite regardless of temperature. Porosity was shown to increase continuously with reaction progress from approximately 2% to 30%, whilst the initial bulk volume remained unchanged. Within these limits permeability increased significantly with porosity, by almost three orders of magnitude, from approximately 7 × 10⁻¹⁹ m² to 3 × 10⁻¹⁶ m². We show that, when mechanical and hydraulic feedbacks can be excluded, permeability, reaction progress, and porosity are related unequivocally.

  5. Speciation of arsenic in biological samples.

    Science.gov (United States)

    Mandal, Badal Kumar; Ogra, Yasumitsu; Anzai, Kazunori; Suzuki, Kazuo T

    2004-08-01

    Speciation of arsenicals in biological samples is an essential tool to gain insight into its distribution in tissues and its species-specific toxicity to target organs. Biological samples (urine, hair, fingernail) examined in the present study were collected from 41 people of West Bengal, India, who were drinking arsenic (As)-contaminated water, whereas 25 blood and urine samples were collected from a population who stopped drinking As contaminated water 2 years before the blood collection. Speciation of arsenicals in urine, water-methanol extract of freeze-dried red blood cells (RBCs), trichloroacetic acid treated plasma, and water extract of hair and fingernail was carried out by high-performance liquid chromatography (HPLC)-inductively coupled argon plasma mass spectrometry (ICP MS). Urine contained arsenobetaine (AsB, 1.0%), arsenite (iAs(III), 11.3), arsenate (iAs(V), 10.1), monomethylarsonous acid (MMA(III), 6.6), monomethylarsonic acid (MMA(V), 10.5), dimethylarsinous acid (DMA(III), 13.0), and dimethylarsinic acid (DMA(V), 47.5); fingernail contained iAs(III) (62.4%), iAs(V) (20.2), MMA(V) (5.7), DMA(III) (8.9), and DMA(V) (2.8); hair contained iAs(III) (58.9%), iAs(V) (34.8), MMA(V) (2.9), and DMA(V) (3.4); RBCs contained AsB (22.5%) and DMA(V) (77.5); and blood plasma contained AsB (16.7%), iAs(III) (21.1), MMA(V) (27.1), and DMA(V) (35.1). MMA(III), DMA(III), and iAs(V) were not found in any plasma and RBCs samples, but urine contained all of them. Arsenic in urine, fingernails, and hair are positively correlated with water As, suggesting that any of these measurements could be considered as a biomarker to As exposure. Status of urine and exogenous contamination of hair urgently need speciation of As in these samples, but speciation of As in nail is related to its total As (tAs) concentration. Therefore, total As concentrations of nails could be considered as biomarker to As exposure in the endemic areas.

  6. Speciation of arsenic in biological samples

    International Nuclear Information System (INIS)

    Mandal, Badal Kumar; Ogra, Yasumitsu; Anzai, Kazunori; Suzuki, Kazuo T.

    2004-01-01

Speciation of arsenicals in biological samples is an essential tool for gaining insight into their distribution in tissues and their species-specific toxicity to target organs. Biological samples (urine, hair, fingernail) examined in the present study were collected from 41 people of West Bengal, India, who were drinking arsenic (As)-contaminated water, whereas 25 blood and urine samples were collected from a population who had stopped drinking As-contaminated water 2 years before the blood collection. Speciation of arsenicals in urine, water-methanol extract of freeze-dried red blood cells (RBCs), trichloroacetic acid-treated plasma, and water extract of hair and fingernail was carried out by high-performance liquid chromatography (HPLC)-inductively coupled argon plasma mass spectrometry (ICP MS). Urine contained arsenobetaine (AsB, 1.0%), arsenite (iAs(III), 11.3), arsenate (iAs(V), 10.1), monomethylarsonous acid (MMA(III), 6.6), monomethylarsonic acid (MMA(V), 10.5), dimethylarsinous acid (DMA(III), 13.0), and dimethylarsinic acid (DMA(V), 47.5); fingernail contained iAs(III) (62.4%), iAs(V) (20.2), MMA(V) (5.7), DMA(III) (8.9), and DMA(V) (2.8); hair contained iAs(III) (58.9%), iAs(V) (34.8), MMA(V) (2.9), and DMA(V) (3.4); RBCs contained AsB (22.5%) and DMA(V) (77.5); and blood plasma contained AsB (16.7%), iAs(III) (21.1), MMA(V) (27.1), and DMA(V) (35.1). MMA(III), DMA(III), and iAs(V) were not found in any plasma or RBC samples, but urine contained all of them. Arsenic in urine, fingernails, and hair is positively correlated with water As, suggesting that any of these measurements could serve as a biomarker of As exposure. Because urine composition varies and hair is subject to exogenous contamination, speciation of As is needed for these samples, whereas As speciation in nails tracks their total As (tAs) concentration. Therefore, the total As concentration of nails could be considered a biomarker of As exposure in the endemic areas.

  7. Activities in support of licensing Ontario Hydro's Dry Storage Container for radioactive waste transportation

    International Nuclear Information System (INIS)

    Boag, J.M.; Lee, H.P.; Nadeau, E.; Taralis, D.; Sauve, R.G.

    1993-01-01

The Dry Storage Container (DSC) is being developed by Ontario Hydro for the on-site storage and possible future transportation of used fuel. The DSC is essentially rectangular in shape, with outer dimensions of approximately 3.5 m (H) x 2.1 m (W) x 2.2 m (L), and has a total weight of approximately 68 Mg when loaded with used fuel. The container cavity is designed to accommodate four standard fuel modules (each module contains 96 CANDU fuel bundles). The space between the inner and outer steel liners (each about 12.7 mm thick) is filled with high-density reinforced shielding concrete (approximately 500 mm thick). Foam-core steel-lined impact limiters will be fitted around the container during transportation to provide impact protection. In addition, an armour ring will be installed around the flanged closure weld (inside the impact limiter) to provide protection from accidental pin impact. Testing and impact analyses have demonstrated that the DSC was able to withstand a 9 m top corner drop and a 1 m drop onto a cylindrical pin (at the welded containment flange) without compromising its structural integrity. Thermal analysis of the DSC during simulated fire accident conditions has shown that at the end of the fire, the exterior wall and interior cavity wall temperatures were 503°C and 78°C, respectively. The maximum fuel sheath temperature predicted was 137°C, which was below the maximum allowable temperature for the fuel. The FD-HEAT code used for this analysis was validated through a heat conduction test of an actual DSC wall section. (J.P.N.)

  8. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  9. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are

  10. Residual Stresses In 3013 Containers

    International Nuclear Information System (INIS)

    Mickalonis, J.; Dunn, K.

    2009-01-01

The DOE Complex is packaging plutonium-bearing materials for storage and eventual disposition or disposal. The materials are handled according to the DOE-STD-3013 which outlines general requirements for stabilization, packaging and long-term storage. The storage vessels for the plutonium-bearing materials are termed 3013 containers. Stress corrosion cracking has been identified as a potential container degradation mode and this work determined that the residual stresses in the containers are sufficient to support such cracking. Sections of the 3013 outer, inner, and convenience containers, in both the as-fabricated condition and the closure welded condition, were evaluated per ASTM standard G-36. The standard requires exposure to a boiling magnesium chloride solution, which is an aggressive testing solution. Tests in a less aggressive 40% calcium chloride solution were also conducted. These tests were used to reveal the relative stress corrosion cracking susceptibility of the as-fabricated 3013 containers. Significant cracking was observed in all containers in areas near welds and transitions in the container diameter. Stress corrosion cracks developed in both the lid and the body of gas tungsten arc welded and laser closure welded containers. The development of stress corrosion cracks in the as-fabricated and in the closure welded container samples demonstrates that the residual stresses in the 3013 containers are sufficient to support stress corrosion cracking if the environmental conditions inside the containers do not preclude the cracking process.

  11. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, but not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
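The 1 + ln((n^2 - n)/2) guarantee is characteristic of the greedy set-cover argument: each pair of distinct haplotypes is an element to cover, and each SNP covers the pairs it distinguishes. A minimal sketch of such a greedy tagger (an illustration of the approach, not the authors' implementation):

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Greedy tag-SNP selection, set-cover style (illustrative sketch).

    haplotypes: list of equal-length strings over {'0', '1'}.
    Returns indices of SNPs that together distinguish every haplotype pair.
    """
    n = len(haplotypes)
    m = len(haplotypes[0])
    # Every pair of distinct haplotypes must be separated by some chosen SNP.
    uncovered = {(i, j) for i, j in combinations(range(n), 2)}
    chosen = []
    while uncovered:
        # Pick the SNP separating the most still-unseparated pairs.
        best = max(range(m),
                   key=lambda s: sum(haplotypes[i][s] != haplotypes[j][s]
                                     for i, j in uncovered))
        newly = {(i, j) for i, j in uncovered
                 if haplotypes[i][best] != haplotypes[j][best]}
        if not newly:  # remaining pairs are identical haplotypes
            break
        chosen.append(best)
        uncovered -= newly
    return chosen
```

For four haplotypes of length four, two well-chosen SNPs typically suffice to make all projections distinct.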

  12. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

The proposal of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for the calculation of the adjoint spectrum from the adjoint neutron balance equations, obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; after these approximations, for the case of the narrow resonances, were substituted into the adjoint neutron balance equation for the fuel, an analytical approximation for the adjoint flux resulted. The results obtained in this work were compared to those generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)

  13. Optical transmission testing based on asynchronous sampling techniques: images analysis containing chromatic dispersion using convolutional neural network

    Science.gov (United States)

    Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.

    2017-08-01

The article presents an image analysis method, based on the asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network algorithm. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion); then to test its accuracy and estimate the impairment for the selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices indexed by pixel position. This article focuses only on the analysis of images containing chromatic dispersion.
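The pipeline described above feeds binary phase-portrait matrices to convolutional layers. A minimal NumPy sketch of the core operation follows; this is an illustrative reimplementation (with an assumed toy kernel), not the authors' OptSim/CNN setup:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of the
    convolutional layers used to classify the phase-portrait images."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def conv_feature(image, kernel):
    """One conv layer -> ReLU -> global average pooling: reduces a binary
    phase-portrait matrix to a single scalar feature. A full classifier
    would stack several such layers and end in a softmax over impairment
    levels (e.g. chromatic dispersion classes)."""
    return np.maximum(conv2d(image, kernel), 0.0).mean()
```

In practice a deep-learning framework would replace these loops, but the sketch shows what "processing a two-dimensional binary matrix" means at the layer level.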

  14. Improvements in and relating to the incubation of samples

    International Nuclear Information System (INIS)

    Bagshawe, K.D.

    1978-01-01

    Apparatus is described for incubating a plurality of biological samples and particularly as part of an analysis, e.g. radioimmunoassay or enzyme assay, of the samples. The apparatus is comprised of an incubation station with a plurality of containers to which samples together with diluent and reagents are supplied. The containers are arranged in rows in two side-by-side columns and are circulated sequentially. Sample removal means is provided either at a fixed location or at a movable point relative to the incubator. Circulation of the containers and the length of sample incubation time is controlled by a computer. The incubation station may include a plurality of sections with the columns in communication so that rows of samples can be moved from the column of one section to the column of an adjacent section, to provide alternative paths for circulation of the samples. (author)

  15. Tank 241-AX-101, grab samples, 1AX-97-1 through 1AX-97-3 analytical results for the final report

    International Nuclear Information System (INIS)

    Esch, R.A.

    1997-01-01

This document is the final report for tank 241-AX-101 grab samples. Four grab samples were collected from riser 5B on July 29, 1997. Analyses were performed on samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. All four samples contained settled solids that appeared to be large salt crystals that precipitated upon cooling to ambient temperature. Less than 25% settled solids were present in the first three samples, therefore only the supernate was sampled and analyzed. Sample 1AX-97-4 contained approximately 25.3% settled solids. Compatibility analyses were not performed on this sample. Attachment 1 is provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information. The settled solids in samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 were less than 25% by volume. Therefore, for these three samples, two 15-mL subsamples were pipetted from the surface of the liquid and submitted to the laboratory for analysis. In addition, a portion of the liquid was taken from each of these three samples to perform an acidified ammonia analysis. No analysis was performed on the settled solid portion of the samples. Sample 1AX-97-4 was reserved for the Process Chemistry group to perform boil down and dissolution testing in accordance with Letter of Instruction for Non-Routine Analysis of Single-Shell Tank 241-AX-101 Grab Samples (Field, 1997) (Correspondence 1). However, prior to the analysis, the sample was inadvertently

  16. Tank 241-AX-101 grab samples 1AX-97-1 through 1AX-97-3 analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    Esch, R.A.

    1997-11-13

This document is the final report for tank 241-AX-101 grab samples. Four grab samples were collected from riser 5B on July 29, 1997. Analyses were performed on samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. All four samples contained settled solids that appeared to be large salt crystals that precipitated upon cooling to ambient temperature. Less than 25% settled solids were present in the first three samples, therefore only the supernate was sampled and analyzed. Sample 1AX-97-4 contained approximately 25.3% settled solids. Compatibility analyses were not performed on this sample. Attachment 1 is provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information. The settled solids in samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 were less than 25% by volume. Therefore, for these three samples, two 15-mL subsamples were pipetted from the surface of the liquid and submitted to the laboratory for analysis. In addition, a portion of the liquid was taken from each of these three samples to perform an acidified ammonia analysis. No analysis was performed on the settled solid portion of the samples. Sample 1AX-97-4 was reserved for the Process Chemistry group to perform boil down and dissolution testing in accordance with Letter of Instruction for Non-Routine Analysis of Single-Shell Tank 241-AX-101 Grab Samples (Field, 1997) (Correspondence 1). However, prior to the analysis, the sample was inadvertently

  17. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
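The gap between the naive two-term form and the fuller formula is easy to check numerically; the sketch below (standard formulas, not tied to the article's examples) compares both against ln n! computed via `math.lgamma`:

```python
import math

def ln_factorial(n):
    """Exact ln n! via the log-gamma function: ln n! = lgamma(n + 1)."""
    return math.lgamma(n + 1)

def stirling_two_term(n):
    """The naive form usually quoted to students: ln n! ~ n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """Including the sqrt(2*pi*n) prefactor: ln n! ~ n ln n - n + (1/2) ln(2 pi n)."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
```

For n = 100 the full formula is accurate to better than 0.001 in ln n!, while the two-term form is off by about 3.2, the value of (1/2) ln(200 pi); whether that matters depends on whether the dropped term survives the subsequent differentiation, which is exactly the kind of pitfall the article discusses.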

  18. Development of a protocol for sampling and analysis of ballast water in Jamaica

    Directory of Open Access Journals (Sweden)

    Achsah A Mitchell

    2014-09-01

Full Text Available The transfer of ballast water by the international shipping industry has negatively impacted the environment. To design a ballast water sampling and analysis protocol for the area, the ballast water tanks of seven bulk cargo vessels entering a Jamaican port were sampled between January 28, 2010 and August 17, 2010. Vessels originated from five ports and used three main routes, some of which conducted ballast water exchange. Twenty-six preserved and 22 live replicate zooplankton samples were obtained. Abundance and richness were higher than at temperate ports. Exchange did not alter the biotic composition but reduced the abundance. Two of the live sample replicates, containing 31.67 and 16.75 viable individuals m-3, were non-compliant with the International Convention for the Control and Management of Ships’ Ballast Water and Sediments. Approximately 12% of the species identified in the ballast water were present in the waters nearest the port in 1995 and 11% were present in the entire bay in 2005. The protocol designed from this study can be used to aid the establishment of a ballast water management system in the Caribbean or used as a foundation for the development of further protocols.

  19. Sampling and analysis plan for the consolidated sludge samples from the canisters and floor of the 105-K East basin

    International Nuclear Information System (INIS)

    BAKER, R.B.

    1999-01-01

This Sampling and Analysis Plan (SAP) provides direction for sampling of fuel canister and floor sludge from the K East Basin to complete the inventory of samples needed for sludge treatment process testing. Sample volumes and sources consider recent reviews made by the sludge treatment subproject. The representative samples will be characterized to the extent needed for the material to be used effectively for testing. Sampling equipment used allows drawing of large volume sludge samples and consolidation of sample material from a number of basin locations into one container. Once filled, the containers will be placed in a cask and transported to Hanford laboratories for recovery and evaluation. Included in the present SAP are the logic for sample location selection, laboratory analysis procedures required, and reporting needed to meet the Data Quality Objectives (DQOs) for this initiative

  20. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    Science.gov (United States)

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  1. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise such as Fano and quantization noise also contribute to the overall noise profile. The question remains, however, how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to a real camera's noise than SD-AWGN. We suggest a further modification to the Poisson model that may improve the noise model.
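The two competing models are easy to simulate. The sketch below (illustrative, with an assumed mean photon count of 5) shows why distinguishing them requires care: both match the signal's mean and variance, so they differ mainly in higher moments such as skewness at low counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_noise(signal):
    """Photon shot noise: each pixel is a Poisson draw with mean = signal."""
    return rng.poisson(signal).astype(float)

def sd_awgn(signal):
    """Signal-dependent AWGN: Gaussian noise whose variance equals the
    signal -- the heteroscedastic-Gaussian stand-in for Poisson corruption."""
    return signal + rng.normal(0.0, np.sqrt(signal))
```

At a mean count of 5 the Poisson samples are visibly right-skewed (theoretical skewness 1/sqrt(5) ~ 0.45) while the SD-AWGN samples are symmetric, despite identical first and second moments.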

  2. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
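For context, the exact GI/M/1 characteristics that both proposed methods approximate follow from the root sigma of sigma = A*(mu(1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival distribution and mu the service rate. A minimal fixed-point sketch (standard queueing theory, not the paper's approximations):

```python
def gim1_sigma(lst, mu, tol=1e-12, max_iter=10000):
    """Solve sigma = A*(mu * (1 - sigma)) by fixed-point iteration.

    lst: Laplace-Stieltjes transform A*(s) of the interarrival distribution.
    sigma is the geometric parameter of the queue-length distribution seen
    by arrivals in a GI/M/1 queue; e.g. the mean wait is sigma/(mu*(1-sigma)).
    """
    sigma = 0.5
    for _ in range(max_iter):
        nxt = lst(mu * (1.0 - sigma))
        if abs(nxt - sigma) < tol:
            return nxt
        sigma = nxt
    return sigma

# Sanity check: Poisson arrivals (rate lam) make this an M/M/1 queue,
# for which sigma must equal the utilisation rho = lam/mu.
lam, mu = 0.8, 1.0
sigma = gim1_sigma(lambda s: lam / (lam + s), mu)
```

A three-moment approximation such as the paper's hyperexponential fit would simply supply a different `lst` built from the matched mixture of two exponentials.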

  3. 40 CFR 763.86 - Sampling.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling. 763.86 Section 763.86 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Asbestos-Containing Materials in Schools § 763.86 Sampling. (a) Surfacing material. An accredited inspector...

  4. Determination of colloidal and dissolved silver in water samples using colorimetric solid-phase extraction.

    Science.gov (United States)

    Hill, April A; Lipert, Robert J; Porter, Marc D

    2010-03-15

The increase in bacterial resistance to antibiotics has led to resurgence in the use of silver as a biocidal agent in applications ranging from washing machine additives to the drinking water treatment system on the International Space Station (ISS). However, growing concerns about the possible toxicity of colloidal silver to bacteria, aquatic organisms and humans have led to recently issued regulations by the US EPA and FDA regarding the usage of silver. As part of an ongoing project, we have developed a rapid, simple method for determining total silver, both ionic (silver(I)) and colloidal, in 0.1-1 mg/L aqueous samples, which spans the ISS potable water target of 0.3-0.5 mg/L (total silver) and meets the US EPA limit of 0.1 mg/L in drinking water. The method is based on colorimetric solid-phase extraction (C-SPE) and involves the extraction of silver(I) from water samples by passage through a solid-phase membrane impregnated with the colorimetric reagent DMABR (5-[4-(dimethylamino)benzylidene]rhodanine). Silver(I) exhaustively reacts with impregnated DMABR to form a colored compound, which is quantified using a handheld diffuse reflectance spectrophotometer. Total silver is determined by first passing the sample through a cartridge containing Oxone, which exhaustively oxidizes colloidal silver to dissolved silver(I). The method, which takes less than 2 min to complete and requires only approximately 1 mL of sample, has been validated through a series of tests, including a comparison with the ICP-MS analysis of a water sample from ISS that contained both silver(I) and colloidal silver. Potential earth-bound applications are also briefly discussed. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  5. Hydrogen distribution studies relevant to CANDU containments

    International Nuclear Information System (INIS)

    Krause, M.; Whitehouse, D.R.; Chan, C.K.; Jones, S.C.A.

    1995-01-01

    Following a loss of coolant accident with coincident loss of emergency core cooling, hydrogen may be produced in a CANDU reactor from the in-core Zircaloy-steam reaction, and released into containment. To meet the requirements for predicting containment hydrogen distribution, and to support measures for mitigation, a computer code GOTHIC is used. Simulations of gas mixing were performed using simple well defined experiments in a small-scale compartment, helium being substituted for hydrogen. At the time of the conference, results indicated that GOTHIC could quantitatively predict the stratified gas distribution resulting from buoyant gas injection near the bottom of an unobstructed compartment. When gas was injected near the top, GOTHIC underpredicted maximum gas concentration at the top, and overpredicted mixing. These errors arise from the finite-volume approximation. 2 refs., 11 figs

  6. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation. It assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor as compared to the impact of the peaking approximation. (orig.)

  7. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
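The trajectory-averaging idea itself is easy to demonstrate on a toy Robbins-Monro problem. The sketch below (illustrative step sizes and noise, a scalar root-finding toy rather than the SAMCMC setting) reports the mean of the iterates as the final estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

def robbins_monro_averaged(noisy_h, theta0, n_iter=20000, a=1.0, alpha=0.7):
    """Robbins-Monro iteration theta_{k+1} = theta_k - a_k * H(theta_k)
    with Polyak-Ruppert trajectory averaging: return the mean of the
    trajectory rather than the last iterate. Step sizes a_k = a / k^alpha
    with alpha in (1/2, 1), the regime where averaging helps."""
    theta = theta0
    total = 0.0
    for k in range(1, n_iter + 1):
        gain = a / k ** alpha
        theta -= gain * noisy_h(theta)  # noisy observation of h(theta)
        total += theta
    return total / n_iter

# Toy problem: find the root of h(theta) = theta - 2, observed with
# unit-variance Gaussian noise.
est = robbins_monro_averaged(lambda t: (t - 2.0) + rng.normal(0.0, 1.0), 0.0)
```

The averaged estimate settles near the root with roughly 1/sqrt(n) error, whereas the raw final iterate fluctuates at the (larger) scale of the final step size; that efficiency gain is the scalar analogue of the asymptotic-efficiency result proved in the paper.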

  8. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed through combining a fast MUSIC-like algorithm termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm and Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements. That is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error of the reduced-rank FOC matrix and thus the capacity of DOA estimation degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal one which has the Toeplitz structure possessing optimal estimated results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields an excellent performance in both spatially-white noise and spatially-colored noise environments.
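The Toeplitz approximation step can be sketched as projecting the estimated matrix onto the set of Toeplitz matrices by averaging each diagonal, which gives the nearest Toeplitz matrix in the Frobenius norm. A minimal illustration (not the authors' code):

```python
import numpy as np

def toeplitz_approx(M):
    """Nearest Toeplitz matrix to a square matrix M in the Frobenius norm:
    replace every diagonal of M by its average. This restores the Toeplitz
    structure that the ideal (infinite-snapshot) cumulant matrix would have."""
    n = M.shape[0]
    T = np.empty_like(M)
    for d in range(-n + 1, n):
        avg = np.diagonal(M, offset=d).mean()
        i = np.arange(max(0, -d), min(n, n - d))
        T[i, i + d] = avg  # write the constant back along that diagonal
    return T
```

Applied to a matrix that is already Toeplitz the projection is the identity; applied to a noisy estimate it averages out the snapshot-induced perturbations along each diagonal.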

  9. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data.

  10. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
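The large-sample branch of a code like KSTEST can be sketched in a few lines: compute the exact one-sample statistic D_n, then evaluate Smirnov's asymptotic series for the significance level. The sketch below mirrors only that branch (Birnbaum's small-sample tables for n <= 80 are omitted), and is an illustration rather than the KSTEST source:

```python
import math

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D_n = sup_x |F_n(x) - F(x)|.
    The supremum over the empirical step function is attained just before or
    just after one of the sorted sample points."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, fx - i / n, (i + 1) / n - fx)
    return d

def ks_pvalue_asymptotic(d, n, terms=100):
    """Significance level from Smirnov's asymptotic series,
    P(D_n > d) ~ 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 n d^2),
    valid for large n; for small samples a table- or recursion-based
    evaluation (as in KSTEST's second branch) is needed instead."""
    lam = math.sqrt(n) * d
    return 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                     for k in range(1, terms + 1))
```

As a sanity check, sqrt(n)*D_n = 1.36 corresponds to the familiar 5% critical value of the asymptotic distribution.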

  11. A new approximation of the dispersion relations occurring in the sound-attenuation problem of turbofan aircraft engines

    Directory of Open Access Journals (Sweden)

    Robert SZABO

    2011-12-01

Full Text Available The dispersion relations appearing in the analysis of the stability of a gas flow in a straight acoustically-lined duct, with respect to perturbations produced by a time-harmonic source, contain, besides the wave number and complex frequency, the solution of a boundary value problem for the Pridmore-Brown equation depending on the wave number and frequency. For this reason, in practice the dispersion relations are rarely simple enough for their zeros to be determined analytically. The determination of the zeros of these dispersion relations is crucial for the prediction of the perturbation attenuation or amplification. In this paper an approximation of the dispersion relations is given. Our approach preserves the general character of the mean flow and the general Pridmore-Brown equation; it is only in the shear flow that we replace the exact solution of the boundary value problem with its Taylor polynomial approximant. In this way new approximate dispersion relations are obtained whose zeros can be found by computer.

  12. 7 CFR 52.774 - Fill of container.

    Science.gov (United States)

    2010-01-01

    ... containers representing a lot. Xd—A specified minimum sample average drained weight. LL—Lower limit for... average meets the specified minimum sample average drained weight (designated as “Xd” in Table I); and (ii... or cherry juice (ounces) LL Xd Packed in any sirup or slightly sweetened water (ounces) LL Xd No. 303...

  13. Computational methods and modeling. 1. Sampling a Position Uniformly in a Trilinear Hexahedral Volume

    International Nuclear Information System (INIS)

    Urbatsch, Todd J.; Evans, Thomas M.; Hughes, H. Grady

    2001-01-01

    Monte Carlo particle transport plays an important role in some multi-physics simulations. These simulations, which may additionally involve deterministic calculations, typically use a hexahedral or tetrahedral mesh. Trilinear hexahedrons are attractive for physics calculations because faces between cells are uniquely defined, distance-to-boundary calculations are deterministic, and hexahedral meshes tend to require fewer cells than tetrahedral meshes. We discuss one aspect of Monte Carlo transport: sampling a position in a tri-linear hexahedron, which is made up of eight control points, or nodes, and six bilinear faces, where each face is defined by four non-coplanar nodes in three-dimensional Cartesian space. We derive, code, and verify the exact sampling method and propose an approximation to it. Our proposed approximate method uses about one-third the memory and can be twice as fast as the exact sampling method, but we find that its inaccuracy limits its use to well-behaved hexahedrons. Daunted by the expense of the exact method, we propose an alternate approximate sampling method. First, calculate beforehand an approximate volume for each corner of the hexahedron by taking one-eighth of the volume of an imaginary parallelepiped defined by the corner node and the three nodes to which it is directly connected. For the sampling, assume separability in the parameters, and sample each parameter, in turn, from a linear pdf defined by the sum of the four corner volumes at each limit (-1 and 1) of the parameter. This method ignores the quadratic portion of the pdf, but it requires less storage, has simpler sampling, and needs no extra, on-the-fly calculations. We simplify verification by designing tests that consist of one or more cells that entirely fill a unit cube. Uniformly sampling complicated cells that fill a unit cube will result in uniformly sampling the unit cube. Unit cubes are easily analyzed. 
The first problem has four wedges (or tents, or A frames) whose
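The alternate corner-volume scheme described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the authors' code; the node ordering and helper names are assumptions:

```python
import numpy as np

def sample_hex_position(nodes, rng):
    """Approximately uniform position sample in a trilinear hexahedron.

    nodes: float array of shape (2, 2, 2, 3); nodes[i, j, k] is the
    corner at reference coordinates (i, j, k) in {0, 1}^3.
    """
    # Corner volumes: 1/8 of the parallelepiped spanned by the three
    # edges meeting at each corner, as described in the abstract.
    vols = np.empty((2, 2, 2))
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                p = nodes[i, j, k]
                e1 = nodes[1 - i, j, k] - p
                e2 = nodes[i, 1 - j, k] - p
                e3 = nodes[i, j, 1 - k] - p
                vols[i, j, k] = abs(np.dot(e1, np.cross(e2, e3))) / 8.0

    def draw(w0, w1):
        # Inverse-CDF sample of s in [0, 1] from the linear pdf
        # g(s) proportional to w0*(1 - s) + w1*s.
        u = rng.random()
        if np.isclose(w0, w1):
            return u
        return (-w0 + np.sqrt(w0 * w0 + u * (w1 * w1 - w0 * w0))) / (w1 - w0)

    # Sample each reference parameter from a linear pdf weighted by the
    # sum of the four corner volumes at each limit of that parameter.
    s = draw(vols[0, :, :].sum(), vols[1, :, :].sum())
    t = draw(vols[:, 0, :].sum(), vols[:, 1, :].sum())
    r = draw(vols[:, :, 0].sum(), vols[:, :, 1].sum())

    # Map (s, t, r) to a position by trilinear interpolation of corners.
    pos = np.zeros(3)
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((s if i else 1.0 - s)
                     * (t if j else 1.0 - t)
                     * (r if k else 1.0 - r))
                pos += w * nodes[i, j, k]
    return pos

# Verification in the spirit of the abstract: a cell that exactly fills
# the unit cube must be sampled uniformly.
rng = np.random.default_rng(0)
cube = np.array([[[[i, j, k] for k in (0, 1)] for j in (0, 1)]
                 for i in (0, 1)], dtype=float)
pts = np.array([sample_hex_position(cube, rng) for _ in range(2000)])
print("mean position:", pts.mean(axis=0))
```

For a unit cube the eight corner volumes are equal, the linear pdfs degenerate to uniform ones, and the sampled positions are uniform, matching the verification strategy the abstract describes.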

  14. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
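The compression a lowrank approximation buys can be illustrated generically. The sketch below uses a truncated SVD on a smooth, propagator-like matrix rather than the authors' representative-row/column selection, but it exploits the same singular-value decay; the test matrix is an invented stand-in:

```python
import numpy as np

# A smooth, propagator-like matrix: entries vary slowly in both indices,
# so the singular values decay rapidly (the matrix is numerically low rank).
x = np.linspace(0.0, 1.0, 200)        # stand-ins for spatial locations
k = np.linspace(0.0, 10.0, 200)       # stand-ins for wavenumbers
W = np.cos(np.outer(x, k))            # W[i, j] = cos(x_i * k_j)

# Truncated SVD gives the best rank-r approximation in the Frobenius
# norm (Eckart-Young); practical lowrank schemes instead pick small sets
# of representative rows and columns, but rely on the same decay.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
W_r = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
print(f"rank-{r} relative error: {rel_err:.2e}")
```

A rank-8 factorization stores two thin 200×8 factors instead of a 200×200 matrix, the kind of saving that makes applying the propagator cheap.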

  15. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  16. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

Guarding the edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards needed to cover the edges of a polygon. This problem is NP-hard, and to our knowledge approximation algorithms exist only for simple polygons. In this paper we present two approximation algorithms for guarding ...

  17. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties of the reduced C∗-algebras C∗_r(G) can give us the approximation properties of G. For example, Lance [7] proved that the nuclearity of C∗_r(G) is equivalent to the amenability of G; ...

  18. CHOMIK -Sampling Device of Penetrating Type for Russian Phobos Sample Return Mission

    Science.gov (United States)

    Seweryn, Karol; Grygorczuk, Jerzy; Rickmann, Hans; Morawski, Marek; Aleksashkin, Sergey; Banaszkiewicz, Marek; Drogosz, Michal; Gurgurewicz, Joanna; Kozlov, Oleg E.; Krolikowska-Soltan, Malgorzata; Sutugin, Sergiej E.; Wawrzaszek, Roman; Wisniewski, Lukasz; Zakharov, Alexander

Measurements of the physical properties of planetary bodies allow many parameters important to scientists in different fields of research to be determined. For example, the effective heat conductivity of the regolith can contribute to a better understanding of processes occurring in the body's interior, and the chemical and mineralogical composition gives us a chance to better understand the origin and evolution of the moons. In principle, such parameters can be determined using three different measurement techniques: (i) in situ measurements, (ii) measurements of samples under laboratory conditions on Earth, and (iii) remote sensing measurements. Scientific missions that allow all three types of measurements give us a chance not only to determine parameters but also to cross-calibrate the instruments. The Russian Phobos Sample Return (PhSR) mission is one of the few that allows all such measurements. The spacecraft will be equipped with remote sensing instruments (spectrometers, long-wave radar and a dust counter), instruments for in-situ measurements (gas chromatograph, seismometer, thermodetector and others), and also a robotic arm and sampling device. The PhSR mission will be launched in November 2011 on board a Zenit launch vehicle. About a year (11 months) later the vehicle will reach Martian orbit, and it is anticipated that it will land on Phobos at the beginning of 2013. Take-off back will take place a month later, and the re-entry module, containing a capsule that will hold the soil sample enclosed in a container, will be on its way back to Earth. The 11 kg re-entry capsule with the container will land in Kazakhstan in mid-2014. A unique geological penetrator, CHOMIK, dedicated to the Phobos Sample Return space mission, will be designed and manufactured at the Space Mechatronics and Robotics Laboratory, Space Research Centre Polish Academy of Sciences (SRC PAS) in Warsaw. Functionally CHOMIK is based on the well known MUPUS

  19. SPENT NUCLEAR FUEL (SNF) PROJECT CANISTER STORAGE BUILDING (CSB) MULTI CANISTER OVERPACK (MCO) SAMPLING SYSTEM VALIDATION (OCRWM)

    International Nuclear Information System (INIS)

    BLACK, D.M.; KLEM, M.J.

    2003-01-01

Approximately 400 multi-canister overpacks (MCOs) containing spent nuclear fuel are to be interim stored at the Canister Storage Building (CSB). Several MCOs (monitored MCOs) are designated to be gas sampled periodically at the CSB sampling/weld station (Bader 2002a). The monitoring program includes pressure, temperature and gas composition measurements of monitored MCOs during their first two years of interim storage at the CSB. The MCO sample cart (CART-001) is used at the sampling/weld station to measure the monitored MCO gas temperature and pressure, obtain gas samples for laboratory analysis, and refill the monitored MCO with high-purity helium as needed. The sample cart and support equipment were functionally and operationally tested and validated before sampling of the first monitored MCO (H-036). This report documents the results of validation testing using training MCO (TR-003) at the CSB. Another report (Bader 2002b) documents the results from gas sampling of the first monitored MCO (H-036). Validation testing of the MCO gas sampling system showed that the equipment and procedure as originally constituted will satisfactorily sample the first monitored MCO. Subsequent system and procedural improvements will provide increased flexibility and reliability for future MCO gas sampling. The physical operation of the sampling equipment during testing provided evidence that the theoretical correlation factors for extrapolating MCO gas composition from sample results are unnecessarily conservative. Empirically derived correlation factors showed adequate conservatism and support use of the sample system for ongoing monitored MCO sampling.

  20. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology.
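The core ABC idea, bypassing the likelihood by comparing simulated and observed summary statistics, can be shown with a minimal rejection sampler. The Gaussian model, flat prior, and tolerance below are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(7)

# "Observed" data: 100 draws from a Gaussian with unknown mean.
true_mu = 1.5
observed = rng.normal(true_mu, 1.0, size=100)
obs_summary = observed.mean()              # summary statistic

def abc_rejection(n_draws=20000, eps=0.05):
    """Rejection ABC: draw mu from the prior, simulate a data set,
    and keep mu whenever the simulated summary statistic falls
    within eps of the observed one. The likelihood is never evaluated."""
    mu = rng.uniform(-5.0, 5.0, size=n_draws)                  # flat prior
    sims = rng.normal(mu, 1.0, size=(100, n_draws)).mean(axis=0)
    return mu[np.abs(sims - obs_summary) < eps]

posterior = abc_rejection()
print(f"accepted {posterior.size} draws; posterior mean ~ {posterior.mean():.2f}")
```

Shrinking eps makes the accepted draws approach the true posterior at the cost of a lower acceptance rate, which is the trade-off that more sophisticated ABC samplers (MCMC-ABC, sequential Monte Carlo) are designed to soften.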

  1. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
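Multiple importance sampling itself is independent of SSAO. A minimal one-dimensional sketch of the balance-heuristic combination of two sampling strategies follows; the integrand and the two pdfs are illustrative choices, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return x ** 2                     # integrand on [0, 1]; exact integral 1/3

n1 = n2 = 5000
x1 = rng.random(n1)                   # technique 1: uniform, p1(x) = 1
x2 = np.sqrt(rng.random(n2))          # technique 2: p2(x) = 2x via inverse CDF

def p1(x):
    return np.ones_like(x)

def p2(x):
    return 2.0 * x

def balance_weight(x, p, n):
    # Veach's balance heuristic: w_i(x) = n_i p_i(x) / sum_k n_k p_k(x)
    return n * p(x) / (n1 * p1(x) + n2 * p2(x))

# MIS estimator: sum over techniques of (1/n_i) sum_j w_i * f / p_i.
est = (np.sum(balance_weight(x1, p1, n1) * f(x1) / p1(x1)) / n1
       + np.sum(balance_weight(x2, p2, n2) * f(x2) / p2(x2)) / n2)
print(f"MIS estimate: {est:.4f} (exact: {1/3:.4f})")
```

The balance heuristic down-weights samples in regions where the other technique's pdf is large, so neither strategy's weak spots dominate the variance, which is the property the SSAO variant above relies on to cut sample counts.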

  2. ABrox-A user-friendly Python module for approximate Bayesian computation with a focus on model comparison.

    Science.gov (United States)

    Mertens, Ulf Kai; Voss, Andreas; Radev, Stefan

    2018-01-01

We give an overview of the basic principles of approximate Bayesian computation (ABC), a class of stochastic methods that enable flexible and likelihood-free model comparison and parameter estimation. Our new open-source software called ABrox is used to illustrate ABC for model comparison on two prominent statistical tests, the two-sample t-test and the Levene test. We further highlight the flexibility of ABC compared to classical Bayesian hypothesis testing by computing an approximate Bayes factor for two multinomial processing tree models. Last but not least, throughout the paper, we introduce ABrox using the accompanying graphical user interface.

  3. When Density Functional Approximations Meet Iron Oxides.

    Science.gov (United States)

    Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong

    2016-10-11

Three density functional approximations (DFAs), PBE, PBE+U, and the Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe₂O₃, Fe₃O₄, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.

  4. Pawlak algebra and approximate structure on fuzzy lattice.

    Science.gov (United States)

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
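The classical Pawlak approximation operators that the paper generalizes to fuzzy lattices can be stated concretely for a plain partition. A toy sketch, with the universe and target set invented for illustration:

```python
def approximations(partition, target):
    """Pawlak lower/upper approximations of `target` with respect to
    the equivalence classes in `partition`."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:      # block entirely inside the target set
            lower |= block
        if block & target:       # block meets the target set
            upper |= block
    return lower, upper

# Toy universe {1, ..., 5} partitioned into equivalence classes.
partition = [{1, 2}, {3, 4}, {5}]
X = {1, 2, 3}
lo, up = approximations(partition, X)
print("lower:", sorted(lo))      # elements certainly in X
print("upper:", sorted(up))      # elements possibly in X
```

X is "rough" precisely when the lower and upper approximations differ, as they do here; the weak approximation operators studied in the paper relax the properties of this pair of maps.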

  5. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

Via a dynamical cluster approximation with N_c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that d-wave superconducting order parameters appear even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N_c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial study; the SCA can handle long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with results measured by angle-resolved photoemission spectroscopy experiments.

  6. Chemistry experiences from a containment fire at Ringhals unit 2

    International Nuclear Information System (INIS)

    Arvidsson, Bengt; Svanberg, Pernilla; Bengtsson, Bernt

    2012-09-01

At the refuelling outage in Ringhals unit 2, during an ongoing Containment Air Test (CAT), a fire occurred at the refuelling deck/entrance floor. Because of the high pressure maintained for the ongoing CAT, all attempts to enter the containment to extinguish the fire were impossible. The fire started as a short circuit (spark) in a vacuum cleaner that had been left in the containment by mistake and then set fire to nearby material. Approximately 5 kg of rubber material and 30 kg of plastic material were estimated to have burned. The complete inside of the containment from the level of the fire (+115) up to the ceiling/dome (+156) was colored black by soot. Soot or smoke was also spread throughout the whole containment building, and the cavity/reactor vessel was contaminated by a minor amount of soot due to some unsuccessful covering of the area during the CAT. All fuel assemblies were stored in the fuel pool building and were not affected by the fire. During the pressure reduction of the containment, high air humidity with moisture precipitation occurred and caused rapid general corrosion on steel surfaces, due to high levels of chloride in the soot, together with some pitting on stainless steel piping. The immediate action was to install portable dryers to lower the humidity, followed by the start of an extensive cleaning program with the objectives of preventing plant degradation/recontamination by soot and bringing the unit back into operation. More than 1300 people, including >150 cleaners, were assigned to this project. All piping, insulation, cables, >4,500 components, 6,340 m² of concrete wall surface and 5,100 m² of concrete floor surface were cleaned over 7 months at an estimated cost of 20 million Euro. Chemistry staff have been deeply involved in sampling and chemical analysis of contaminated surfaces/objects, as well as in establishing criteria for cleaning equipment, procedures and specifications. Approximately 12000 measurements have been performed using portable salt meters in

  7. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles in this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, in the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  8. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  9. MAVL wastes containers functional demonstration and associated tests program

    International Nuclear Information System (INIS)

    Templier, J.C.

    2002-01-01

In the framework of studies on intermediate-level long-lived (MAVL) wastes, the CEA is developing containers for interim waste storage. This program aims to produce a "B wastes container" demonstrator. A demonstrator is a container, parts of a container, or samples used to validate the tests. This document presents the state of the study in the following three chapters: description of functions, base data and design choices; presentation of the functional demonstrators; and description of the demonstration tests. (A.L.B.)

  10. Uniform analytic approximation of Wigner rotation matrices

    Science.gov (United States)

    Hoffmann, Scott E.

    2018-02-01

We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
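For the special case m1 = m2 = 0, the Wigner element reduces to a Legendre polynomial, and the classical small-angle Bessel limit can be checked numerically. This illustrates the flavor of uniform Bessel approximation discussed, not the paper's general formula:

```python
import numpy as np
from scipy.special import eval_legendre, jv

# For m1 = m2 = 0, d^j_{00}(theta) = P_j(cos theta), and the classical
# small-angle limit is the integer-order Bessel approximation
# P_j(cos t) ~ J_0((j + 1/2) t), uniform in j for small t.
j, theta = 80, 0.01
exact = eval_legendre(j, np.cos(theta))
bessel = jv(0, (j + 0.5) * theta)
print(f"P_j(cos t)      = {exact:.6f}")
print(f"J_0((j+1/2) t)  = {bessel:.6f}")
```

Even though θ is small, the Bessel argument (j + 1/2)θ is of order one, which is why a uniform-in-j approximation is needed rather than a plain Taylor expansion in θ.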

  11. Determination of isoflavones in soy and selected foods containing soy by extraction, saponification, and liquid chromatography: collaborative study.

    Science.gov (United States)

    Klump, S P; Allred, M C; MacDonald, J L; Ballam, J M

    2001-01-01

Isoflavones are biologically active compounds occurring naturally in a variety of plants, with relatively high levels found in soybeans. Twelve laboratories participated in a collaborative study to determine the aglycon isoflavone content of 8 test samples of soy and foods containing soy. The analytical method for the determination of isoflavones incorporates a mild saponification step that reduces the number of analytes measured and permits quantitation versus commercially available, stable reference standards. Test samples were extracted at 65°C with methanol-water (80 + 20), saponified with dilute sodium hydroxide solution, and analyzed by reversed-phase liquid chromatography with UV detection at 260 nm. Isoflavone results were reported as μg aglycon/g or μg aglycon equivalents/g. The 8 test samples included 2 blind duplicates and 4 single test samples with total isoflavone concentrations ranging from approximately 50 to 3000 μg/g. Test samples of soy ingredients and products made with soy were distributed to collaborators with appropriate reference standards. Collaborators were asked to analyze test samples in duplicate on 2 separate days. The data were analyzed for individual isoflavone components; subtotals of daidzin-daidzein, glycitin-glycitein, and genistin-genistein; and total isoflavones. The relative standard deviation (RSD) for repeatability was 1.8-7.1%, and the RSD for reproducibility was 3.2-16.1% for total isoflavone values of 47-3099 μg/g.

  12. Approximate Bayesian computation for forward modeling in cosmology

    International Nuclear Information System (INIS)

    Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar

    2015-01-01

    Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-gaussian errors, non-linear measurements processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release

  13. The non-proliferation experiment and gas sampling as an on-site inspection activity: A progress report

    International Nuclear Information System (INIS)

    Carrigan, C.R.

    1994-03-01

The Non-proliferation Experiment (NPE) is contributing to the development of gas sampling methods and models that may be incorporated into future on-site inspection (OSI) activities. Surface gas sampling and analysis, motivated by nuclear test containment studies, have already demonstrated the tendency for the gaseous products of an underground nuclear test to flow hundreds of meters to the surface over periods ranging from days to months. Even in the presence of a uniform sinusoidal pressure variation, there will be a net flow of cavity gas toward the surface. To test this barometric pumping effect at Rainier Mesa, gas bottles containing sulfur hexafluoride and ³He were added to the pre-detonation cavity for the 1 kt chemical explosives test. Pre-detonation measurements of the background levels of both gases were obtained at selected sites on top of the mesa. The background levels of both tracers were found to be at or below mass spectrographic/gas chromatographic sensitivity thresholds, in the parts-per-trillion range. Post-detonation, gas chromatographic analyses of samples taken during barometric pressure lows from the sampling sites on the mesa indicate the presence of significant levels (300-600 ppt) of sulfur hexafluoride. However, mass spectrographic analyses of gas samples taken to date do not show the presence of ³He. To explain these observations, several possibilities are being explored through additional sampling/analysis and numerical modeling. For the NPE, the detonation point was approximately 400 m beneath the surface of Rainier Mesa, and the event did not produce significant fracturing or subsidence on the surface of the mesa. Thus, the NPE may ultimately represent an extreme, but useful, example for the application and tuning of cavity gas detection techniques.

  14. The buffer/container experiment: results, synthesis, issues

    International Nuclear Information System (INIS)

    Graham, J.; Chandler, N.A.; Dixon, D.A.; Roach, P.J.; To, T.; Wan, A.W.L.

    1997-12-01

    A large in-ground experiment has examined how heat affects the performance of the dense sand bentonite 'buffer' that has been proposed for use in the Canadian Nuclear Fuel Waste Management Program. The experiment was performed by Atomic Energy of Canada Limited at its Underground Research Laboratory, Lac du Bonnet, Manitoba between 1991 and 1994. The experiment placed a full-size heater representing a container of nuclear fuel waste in a 1.24-m diameter borehole filled with buffer below the floor of a room excavated at 240-m depth in granitic rock of the Canadian Shield. The buffer and surrounding rock were extensively instrumented for temperatures, total pressures, water pressures, suctions, and rock displacements. Power was provided to the heater for almost 900 days. The experiment showed that good rock conditions can be pre-selected, a borehole can be drilled, and buffer can be placed at controlled densities and water contents. The instrumentation generally worked well, and an extensive data base was successfully organized. Drying was observed in buffer close to the heater. This caused some desiccation cracking. However the cracks only extended approximately one third of the distance to the buffer-rock interface and did not form an advective pathway. Following sampling at the time of decommissioning, cracked samples of buffer were transported to the laboratory and given access to water. The hydraulic conductivities and swelling pressures of these resaturated samples were very similar to those of uncracked buffer. A good balance was achieved between the mass of water flowing into the experiment from the surrounding rock and the increased mass of water in the buffer. A good understanding was developed of the relationships between suctions, water contents, and total pressures in buffer near the buffer-rock interface. 
Comparisons between measurements and predictions of measured parameters show that a good understanding has been developed of the processes operating

  15. The buffer/container experiment: results, synthesis, issues

    Energy Technology Data Exchange (ETDEWEB)

    Graham, J. [Univ. of Manitoba, Dept. of Civil Engineering, Winnipeg, MB (Canada); Chandler, N.A.; Dixon, D.A.; Roach, P.J.; To, T.; Wan, A.W.L

    1997-12-01

    A large in-ground experiment has examined how heat affects the performance of the dense sand bentonite 'buffer' that has been proposed for use in the Canadian Nuclear Fuel Waste Management Program. The experiment was performed by Atomic Energy of Canada Limited at its Underground Research Laboratory, Lac du Bonnet, Manitoba between 1991 and 1994. The experiment placed a full-size heater representing a container of nuclear fuel waste in a 1.24-m diameter borehole filled with buffer below the floor of a room excavated at 240-m depth in granitic rock of the Canadian Shield. The buffer and surrounding rock were extensively instrumented for temperatures, total pressures, water pressures, suctions, and rock displacements. Power was provided to the heater for almost 900 days. The experiment showed that good rock conditions can be pre-selected, a borehole can be drilled, and buffer can be placed at controlled densities and water contents. The instrumentation generally worked well, and an extensive data base was successfully organized. Drying was observed in buffer close to the heater. This caused some desiccation cracking. However the cracks only extended approximately one third of the distance to the buffer-rock interface and did not form an advective pathway. Following sampling at the time of decommissioning, cracked samples of buffer were transported to the laboratory and given access to water. The hydraulic conductivities and swelling pressures of these resaturated samples were very similar to those of uncracked buffer. A good balance was achieved between the mass of water flowing into the experiment from the surrounding rock and the increased mass of water in the buffer. A good understanding was developed of the relationships between suctions, water contents, and total pressures in buffer near the buffer-rock interface. Comparisons between measurements and predictions of measured parameters show that a good understanding has been developed of the processes

  16. Approximation Properties of Certain Summation Integral Type Operators

    Directory of Open Access Journals (Sweden)

    Patel P.

    2015-03-01

    Full Text Available In the present paper, we study approximation properties of a family of linear positive operators and establish direct results, asymptotic formula, rate of convergence, weighted approximation theorem, inverse theorem and better approximation for this family of linear positive operators.

  17. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincaré surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  18. Storage Effects on Sample Integrity of Environmental Surface Sampling Specimens with Bacillus anthracis Spores.

    Science.gov (United States)

    Perry, K Allison; O'Connell, Heather A; Rose, Laura J; Noble-Wang, Judith A; Arduino, Matthew J

    The effect of packaging, shipping temperatures, and storage times on recovery of Bacillus anthracis Sterne spores from swabs was investigated. Macrofoam swabs were pre-moistened, inoculated with Bacillus anthracis spores, and packaged in primary containment or secondary containment before storage at -15°C, 5°C, 21°C, or 35°C for 0-7 days. Swabs were processed according to validated Centers for Disease Control/Laboratory Response Network culture protocols, and the percent recovery relative to a reference sample (T0) was determined for each variable. No differences were observed in recovery between swabs held at -15°C and 5°C (p ≥ 0.23). These two temperatures provided significantly better recovery than swabs held at 21°C or 35°C (all 7 days pooled, p ≤ 0.04). The percent recovery at 5°C was not significantly different if processed on days 1, 2, or 4, but was significantly lower on day 7 (day 2 vs. 7, 5°C, 10^2 spores, p = 0.03). Secondary containment provided significantly better percent recovery than primary containment, regardless of storage time (5°C data, p ≤ 0.008). The integrity of environmental swab samples containing Bacillus anthracis spores shipped in secondary containment was maintained when stored at -15°C or 5°C and processed within 4 days to yield the optimum percent recovery of spores.
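The study's central metric, percent recovery relative to the time-zero reference sample (T0), is a simple ratio. A minimal sketch of that calculation, with illustrative (not published) colony counts:

```python
def percent_recovery(cfu_sample, cfu_reference):
    """Percent recovery of spores from a stored swab, relative to the
    colony-forming units (CFU) counted in the time-zero reference (T0)."""
    if cfu_reference <= 0:
        raise ValueError("reference CFU count must be positive")
    return 100.0 * cfu_sample / cfu_reference

# Hypothetical counts: 74 CFU from a swab stored 4 days at 5 degrees C,
# against 92 CFU in the T0 reference sample.
print(round(percent_recovery(74, 92), 1))
```

The paper's comparisons (e.g., -15°C vs. 21°C, primary vs. secondary containment) are statistical tests on sets of such ratios.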

  19. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus, the Coriolis coupling is neglected, and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus, based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  20. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)
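The density response curve mentioned for elementary CA rule 26 can be probed directly by simulation: iterate the rule from a random configuration of initial density p and measure the final density of 1s. The sketch below is a brute-force Monte Carlo stand-in for one point of that curve, not the paper's minimal entropy maps (which iterate finite-dimensional maps on block probabilities instead of explicit configurations).

```python
import random

def step(cells, rule=26):
    """One synchronous update of an elementary CA on a ring (periodic ends).

    The new state of cell i is the bit of `rule` indexed by the neighborhood
    (left, center, right) read as a 3-bit number, per Wolfram's numbering.
    """
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def density_response(p, width=1000, steps=100, rule=26, seed=0):
    """Estimate the density of 1s after `steps` iterations, starting from a
    Bernoulli(p) random configuration on a ring of `width` cells."""
    rng = random.Random(seed)
    cells = [1 if rng.random() < p else 0 for _ in range(width)]
    for _ in range(steps):
        cells = step(cells, rule)
    return sum(cells) / width

# One point of the density response curve for rule 26:
print(density_response(0.5))
```

Sweeping p over [0, 1] traces the full response curve; the local structure and minimal entropy approximations aim to predict this curve without simulating configurations at all.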