Recent bibliography on analytical and sampling problems of a PWR primary coolant
International Nuclear Information System (INIS)
Illy, H.
1980-07-01
An extensive bibliography on the problems of analysis and sampling of the primary cooling water of PWRs is presented. The aim was to collect the analytical methods for dissolved gases; sampling and sample preparation are also taken into account. The literature of the last 8-10 years is included. The bibliography is arranged in alphabetical order by topic. The most important topics are as follows: boric acid, gas analysis, hydrogen isotopes, iodine, noble gases, radiation monitoring, sampling and preparation, water chemistry. (R.J.)
Recent bibliography on analytical and sampling problems of a PWR primary coolant Suppl. 4
International Nuclear Information System (INIS)
Illy, H.
1986-09-01
The 4th supplement of a bibliographical series comprising the analytical and sampling problems of the primary coolant of PWR type reactors covers the literature from 1985 up to July 1986 (220 items). References are listed according to the following topics: boric acid; chloride, chlorine; general; hydrogen isotopes; iodine; iodide; noble gases; oxygen; other elements; radiation monitoring; reactor safety; sampling; water chemistry. (V.N.)
Recent bibliography on analytical and sampling problems of a PWR primary coolant Pt. 1
International Nuclear Information System (INIS)
Illy, H.
1981-12-01
The first bibliography on analytical and sampling problems of a PWR primary coolant (KFKI Report-1980-48) was published in 1980 and covered the literature published in the previous 8-10 years. The present supplement reviews the subsequent literature up to December 1981. It also includes some references overlooked in the first volume. The serial numbers are continued from the first bibliography. (author)
Recent bibliography on analytical and sampling problems of a PWR primary coolant Suppl. 3
International Nuclear Information System (INIS)
Illy, H.
1985-03-01
The present supplement to the bibliography on analytical and sampling problems of PWR primary coolant covers the literature published in 1984 and includes some references overlooked in the previous volumes dealing with the publications of the last 10 years. References are divided into topics characterized by the following headings: boric acid; chloride; chlorine; carbon dioxide; general; gas analysis; hydrogen isotopes; iodine; iodide; nitrogen; noble gases and radium; ammonia; ammonium; oxygen; other elements; radiation monitoring; reactor safety; sampling; water chemistry. Under a given subject, bibliographical information is listed in alphabetical order of the authors. (V.N.)
Analytical and sampling problems in primary coolant circuits of PWR-type reactors
International Nuclear Information System (INIS)
Illy, H.
1980-10-01
Details of recent analytical methods for the analysis and sampling of a PWR primary coolant are given in the following order: sampling and preparation; analysis of the gases dissolved in the water; monitoring of radioactive substances; checking of the boric acid concentration, which controls the reactivity. The bibliography of this work and directions for its use are published in a separate report: KFKI-80-48 (1980). (author)
ITOUGH2 sample problems
International Nuclear Information System (INIS)
Finsterle, S.
1997-11-01
This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a] and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Pruess, 1987, 1991a]. The ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background. The ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report. The TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2 is also indicated. Each sample problem focuses on a few selected issues shown in Table 1.2. ITOUGH2 input features and the usage of program options are described. Furthermore, interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types, and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model.
Toxaphene: a challenging analytical problem
de Geus, H.J.; Wester, P.G.; Schelvis, A.; de Boer, J.; Brinkman, U.A.T.
2000-01-01
The analysis of toxaphene, a highly complex mixture of chlorinated bornanes, bornenes and camphenes, is a challenging problem, especially as individual congeners are present at trace levels in biota and other relevant samples. The complicated nomenclature of the compounds of interest is briefly
Analytical Chemistry Division's sample transaction system
International Nuclear Information System (INIS)
Stanton, J.S.; Tilson, P.A.
1980-10-01
The Analytical Chemistry Division uses the DECsystem-10 computer for a wide range of tasks: sample management, timekeeping, quality assurance, and data calculation. This document describes the features and operating characteristics of many of the computer programs used by the Division. The descriptions are divided into chapters, each of which covers one aspect of the Analytical Chemistry Division's computer processing.
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
Edwards, Lauren; Johnson, Luke; Milosavljevic, Maja; Gadepally, Vijay; Miller, Benjamin A. (Lincoln Laboratory)
Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas ... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.
Contemporary sample stacking in analytical electrophoresis
Czech Academy of Sciences Publication Activity Database
Šlampová, Andrea; Malá, Zdeňka; Pantůčková, Pavla; Gebauer, Petr; Boček, Petr
2013-01-01
Roč. 34, č. 1 (2013), s. 3-18 ISSN 0173-0835 R&D Projects: GA ČR GAP206/10/1219 Institutional support: RVO:68081715 Keywords : biological samples * stacking * trace analysis * zone electrophoresis Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.161, year: 2013
Contemporary sample stacking in analytical electrophoresis
Czech Academy of Sciences Publication Activity Database
Malá, Zdeňka; Šlampová, Andrea; Křivánková, Ludmila; Gebauer, Petr; Boček, Petr
2015-01-01
Roč. 36, č. 1 (2015), s. 15-35 ISSN 0173-0835 R&D Projects: GA ČR(CZ) GA13-05762S Institutional support: RVO:68081715 Keywords : biological samples * stacking * trace analysis * zone electrophoresis Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 2.482, year: 2015
Analytical laboratory and mobile sampling platform
International Nuclear Information System (INIS)
Stetzenbach, K.; Smiecinski, A.
1996-01-01
This is the final report for the Analytical Laboratory and Mobile Sampling Platform project. This report contains only major findings and conclusions resulting from this project. Detailed reports of all activities performed for this project were provided to the Project Office every quarter since the beginning of the project. This report contains water chemistry data for samples collected in the Nevada section of Death Valley National Park (Triangle Area Springs), Nevada Test Site springs, Pahranagat Valley springs, Nevada Test Site wells, Spring Mountain springs and Crater Flat and Amargosa Valley wells
Analytical solutions to matrix diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Kekäläinen, Pekka, E-mail: pekka.kekalainen@helsinki.fi [Laboratory of Radiochemistry, Department of Chemistry, P.O. Box 55, FIN-00014 University of Helsinki (Finland)
2014-10-06
We report an analytical method to solve, in a few cases of practical interest, the equations which have traditionally been proposed for the matrix diffusion problem. In matrix diffusion, elements dissolved in ground water can penetrate the porous rock surrounding the advective flow paths. In the context of radioactive waste repositories this phenomenon provides a mechanism by which the area of rock surface in contact with advecting elements is greatly enhanced, and it can thus be an important delay mechanism. The cases solved are relevant for laboratory as well as for in situ experiments. Solutions are given as integral representations well suited for easy numerical evaluation.
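The penetration of dissolved elements into the rock matrix can be illustrated with the classical one-dimensional diffusion solution. This is a textbook sketch for orientation only, not one of the paper's coupled advection-diffusion solutions; the function name and the parameter values are illustrative assumptions.

```python
import math

def matrix_penetration_profile(x, t, d_e):
    """Relative concentration C/C0 at depth x (m) into a semi-infinite porous
    matrix after contact time t (s), for the classical 1D diffusion problem
    with constant boundary concentration:
        C/C0 = erfc( x / (2*sqrt(De*t)) )
    d_e is an effective diffusivity in m^2/s. This textbook solution only
    illustrates the kind of transport the paper treats; the paper's own
    solutions couple matrix diffusion to the advective flow path."""
    return math.erfc(x / (2.0 * math.sqrt(d_e * t)))

# Example: roughly one year of contact, De = 1e-11 m^2/s (assumed values)
for depth_mm in (1, 5, 10):
    ratio = matrix_penetration_profile(depth_mm * 1e-3, 3.15e7, 1e-11)
    print(f"{depth_mm} mm: C/C0 = {ratio:.3f}")
```

The profile decays monotonically with depth, which is why a larger contacted rock surface translates into a stronger retardation of the advecting elements.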
Multidimensional integral representations problems of analytic continuation
Kytmanov, Alexander M
2015-01-01
The monograph is devoted to integral representations for holomorphic functions in several complex variables, such as Bochner-Martinelli, Cauchy-Fantappiè, Koppelman, multidimensional logarithmic residue etc., and their boundary properties. The applications considered are problems of analytic continuation of functions from the boundary of a bounded domain in C^n. In contrast to the well-known Hartogs-Bochner theorem, this book investigates functions with the one-dimensional property of holomorphic extension along complex lines, and addresses the problem of obtaining multidimensional boundary analogs of the Morera theorem. This book is a valuable resource for specialists in complex analysis and theoretical physics, as well as graduate and postgraduate students with an understanding of standard university courses in complex, real and functional analysis, algebra and geometry.
The problems of accountable and analytical procuring of enterprise management
Directory of Open Access Journals (Sweden)
Kovalova Tatiana Volodymyrivna
2016-02-01
Full Text Available This article investigates the main aspects of accountable and analytical procuring of enterprise management: its essence, purpose, functions and tasks. The main elements and the essence of accountable and analytical information are determined, taking into consideration the needs of modern management. The structural elements of accountable and analytical procuring are exposed, and conceptual approaches to building the accountable and analytical procuring of enterprise management are formed. The main problems of improving the accountable and analytical informational procuring of managerial decision-making are analyzed, with the aim of solving economic problems arising from the current situation of the national economy.
Ball assisted device for analytical surface sampling
ElNaggar, Mariam S; Van Berkel, Gary J; Covey, Thomas R
2015-11-03
A system for sampling a surface includes a sampling probe having a housing and a socket, and a rolling sampling sphere within the socket. The housing has a sampling fluid supply conduit and a sampling fluid exhaust conduit. The sampling fluid supply conduit supplies sampling fluid to the sampling sphere. The sampling fluid exhaust conduit has an inlet opening for receiving sampling fluid carried from the surface by the sampling sphere. A surface sampling probe and a method for sampling a surface are also disclosed.
Analytical methods for heat transfer and fluid flow problems
Weigand, Bernhard
2015-01-01
This book describes useful analytical methods by applying them to real-world problems rather than solving the usual over-simplified classroom problems. The book demonstrates the applicability of analytical methods even for complex problems and guides the reader to a more intuitive understanding of approaches and solutions. Although the solution of Partial Differential Equations by numerical methods is the standard practice in industries, analytical methods are still important for the critical assessment of results derived from advanced computer simulations and the improvement of the underlying numerical techniques. Literature devoted to analytical methods, however, often focuses on theoretical and mathematical aspects and is therefore useless to most engineers. Analytical Methods for Heat Transfer and Fluid Flow Problems addresses engineers and engineering students. The second edition has been updated, the chapters on non-linear problems and on axial heat conduction problems were extended. And worked out exam...
ISCO Grab Sample Ion Chromatography Analytical Data
U.S. Environmental Protection Agency — ISCO grab samples were collected from river, wastewater treatment plant discharge, and public drinking water intakes. Samples were analyzed for major ions (ppb)...
Chapter 12. Sampling and analytical methods
International Nuclear Information System (INIS)
Busenberg, E.; Plummer, L.N.; Cook, P.G.; Solomon, D.K.; Han, L.F.; Groening, M.; Oster, H.
2006-01-01
When water samples are taken for the analysis of CFCs, regardless of the sampling method used, contamination of samples by contact with atmospheric air (with its 'high' CFC concentrations) is a major concern. This is because groundwaters usually have lower CFC concentrations than waters which have been exposed to modern air. Some groundwaters might not contain CFCs at all and are, therefore, most sensitive to trace contamination by atmospheric air. Thus, extreme precautions are needed to obtain uncontaminated samples when groundwaters, particularly those with older ages, are sampled. It is recommended at the start of any CFC investigation that samples from a CFC-free source be collected and analysed as a check on the sampling equipment and methodology. The CFC-free source might be a deep monitoring well or, alternatively, CFC-free water could be carefully prepared in the laboratory. It is especially important that all tubing, pumps and connections that will be used in the sampling campaign be checked in this manner.
40 CFR 141.22 - Turbidity sampling and analytical requirements.
2010-07-01
§ 141.22 Turbidity sampling and analytical requirements. The requirements in this section apply to ... the water distribution system at least once per day, for the purposes of making turbidity measurements...
Problem-based learning on quantitative analytical chemistry course
Fitri, Noor
2017-12-01
This research applies the problem-based learning method to the quantitative analytical chemistry course, "Analytical Chemistry II", especially in relation to essential oil analysis. The learning outcomes of this course include understanding of the lectures, the skill of applying course materials, and the ability to identify, formulate and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve those problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course improves students' knowledge, skills, abilities and attitude. Students are not only skilled at solving problems in analytical chemistry, especially in essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but are also skilled at working with computer programs and able to understand materials and problems in English.
Eco-analytical Methodology in Environmental Problems Monitoring
Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.
2017-01-01
Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation takes a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The fact that governments, corporations, scientists and nations focus on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. Therefore, the challenge posed to science by environmental problems is the formation of geo-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is thus to find the optimal trajectory of industrial development that prevents irreversible problems in the biosphere which could stop the progress of civilization.
Hanford analytical sample projections FY 1996 - FY 2001. Revision 4
Energy Technology Data Exchange (ETDEWEB)
Joyce, S.M.
1997-07-02
This document summarizes the biannual Hanford sample projections for fiscal years 1997-2001. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems, Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition to this revision, details on laboratory scale technology (development), sample management, and data management activities were requested. This information will be used by the Hanford Analytical Services program and the Sample Management Working Group to assure that laboratories and resources are available and effectively utilized to meet these documented needs.
ANALYTICAL ANARCHISM: THE PROBLEM OF DEFINITION AND DEMARCATION
Konstantinov M.S.
2012-01-01
This paper is the first in our country's scholarship to consider a new trend of anarchist thought: analytical anarchism. Critical analysis of the key propositions of the basic versions of this trend, the anarcho-capitalist and the egalitarian, is used as a methodological tool. The study proposes a classification of discernible trends within analytical anarchism on the basis of value criteria, and identifies conceptual and methodological problems of defining analytical anarchism and its ...
Network Monitoring as a Streaming Analytics Problem
Gupta, Arpit
2016-11-02
Programmable switches make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real time. Unfortunately, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only the traffic that is of interest to the query at hand. Others have studied network monitoring in the context of streaming; yet previous work has not closed the loop in a way that allows network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple, and efficiently partitions each query between the switches and the stream processor through iterative refinement. Sonata extracts only the traffic that pertains to each query, ensuring that the stream processor can scale to traffic rates of several terabits per second. We show, with a simple example query involving DNS reflection attacks and traffic traces from one of the world's largest IXPs, that Sonata can capture 95% of all traffic pertaining to the query while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude. Copyright 2016 ACM.
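The "packet as a tuple" view can be sketched as a filter-then-aggregate query in a few lines of Python. The field names and the example query below are hypothetical illustrations of the pattern, not Sonata's actual API.

```python
from collections import Counter
from typing import NamedTuple

class Packet(NamedTuple):
    # hypothetical tuple fields for illustration
    src_ip: str
    dst_ip: str
    src_port: int
    length: int

def dns_response_bytes(stream):
    """Treat each packet as a tuple and keep only DNS responses (source port
    53), aggregating bytes per destination: the filter-then-aggregate shape
    that a query planner could split between switch and stream processor."""
    totals = Counter()
    for pkt in stream:
        if pkt.src_port == 53:                  # filter stage (switch-friendly)
            totals[pkt.dst_ip] += pkt.length    # aggregate stage (stream processor)
    return totals

pkts = [Packet("8.8.8.8", "10.0.0.1", 53, 1200),
        Packet("1.2.3.4", "10.0.0.1", 80, 400),
        Packet("8.8.8.8", "10.0.0.2", 53, 300)]
print(dns_response_bytes(pkts))  # Counter({'10.0.0.1': 1200, '10.0.0.2': 300})
```

In a deployment along the lines the abstract describes, the filter predicate would be pushed down to the switch so that only matching tuples ever reach the stream processor.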
Analytical solutions of the electrostatically actuated curled beam problem
Younis, Mohammad I.
2014-01-01
This work presents analytical expressions for the electrostatically actuated, initially deformed cantilever beam problem. The formulation is based on the continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We
Bias Assessment of General Chemistry Analytes using Commutable Samples.
Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter
2014-11-01
Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent it. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.
Hanford analytical sample projections FY 1998 - FY 2002
International Nuclear Information System (INIS)
Joyce, S.M.
1998-01-01
Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analyses requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs
Hanford analytical sample projections FY 1998--FY 2002
Energy Technology Data Exchange (ETDEWEB)
Joyce, S.M.
1998-02-12
Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analyses requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs.
Post-analytical stability of 23 common chemistry and immunochemistry analytes in incurred samples
DEFF Research Database (Denmark)
Nielsen, Betina Klint; Frederiksen, Tina; Friis-Hansen, Lennart
2017-01-01
BACKGROUND: Storage of blood samples after centrifugation, decapping and initial sampling allows ordering of additional blood tests. The pre-analytical stability of biochemistry and immunochemistry analytes has been studied in detail, but little is known about their post-analytical stability in incurred samples. METHODS: We examined the stability of 23 routine analytes on the Dimension Vista® (Siemens Healthineers, Denmark): 42-60 routine samples in lithium-heparin gel tubes (Vacutainer, BD, USA) were centrifuged at 3000×g for 10 min. Immediately after centrifugation, the initial concentrations of the analytes were measured in duplicate (t=0). The tubes were stored decapped at room temperature and re-analyzed after 2, 4, 6, 8 and 10 h in singletons. The concentrations from reanalysis were normalized to the initial concentration (t=0). Internal acceptance criteria for bias and total error were used.
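The normalization step in the methods above can be sketched as follows. The ±5% default acceptance limit in the sketch is a made-up illustration, not the study's actual internal acceptance criterion.

```python
def percent_deviation(initial_duplicates, remeasured):
    """Deviation (in %) of a re-measured concentration from the baseline,
    where the baseline is the mean of the duplicate t=0 measurements."""
    baseline = sum(initial_duplicates) / len(initial_duplicates)
    return 100.0 * (remeasured - baseline) / baseline

def is_stable(initial_duplicates, timed_series, limit_pct=5.0):
    """True if every re-measurement stays within the acceptance limit.
    The +/-5% default is an assumed, illustrative criterion only."""
    return all(abs(percent_deviation(initial_duplicates, c)) <= limit_pct
               for c in timed_series)

# e.g. a sodium-like analyte re-measured at t = 2, 4, 6, 8, 10 h
print(is_stable([140.0, 141.0], [140.2, 140.8, 141.5, 139.9, 140.6]))
```

Normalizing every later singleton to the duplicate baseline separates genuine analyte drift from the run-to-run imprecision of the analyzer.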
An analytical statistical approach to the 3D reconstruction problem
Energy Technology Data Exchange (ETDEWEB)
Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering
2011-07-01
The approach presented here concerns the reconstruction problem for 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of the signals obtained in X-ray CT. Additionally, the image processing performed in our approach is embedded in an analytical methodology. This conception significantly improves the quality of the images obtained after reconstruction and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations proved that the reconstruction algorithm schematically described here outperforms conventional analytical methods in the obtained image quality. (orig.)
Development of analytical techniques for safeguards environmental samples at JAEA
International Nuclear Information System (INIS)
Sakurai, Satoshi; Magara, Masaaki; Usuda, Shigekazu; Watanabe, Kazuo; Esaka, Fumitaka; Hirayama, Fumio; Lee, Chi-Gyu; Yasuda, Kenichiro; Inagawa, Jun; Suzuki, Daisuke; Iguchi, Kazunari; Kokubu, Yoko S.; Miyamoto, Yutaka; Ohzu, Akira
2007-01-01
JAEA has been developing, under the auspices of the Ministry of Education, Culture, Sports, Science and Technology of Japan, analytical techniques for ultra-trace amounts of nuclear materials in environmental samples in order to contribute to the strengthened safeguards system. Essential techniques for bulk and particle analysis, as well as screening, of environmental swipe samples have been established as ultra-trace analytical methods for uranium and plutonium. In January 2003, JAEA was qualified, including its quality control system, as a member of the IAEA network of analytical laboratories for environmental samples. Since 2004, JAEA has conducted the analysis of domestic and IAEA samples, through which JAEA's analytical capability has been verified and improved. In parallel, advanced techniques have been developed in order to expand the applicability to samples of various elemental compositions and impurities and to improve analytical accuracy and efficiency. This paper summarizes the course of technical development in environmental sample analysis at JAEA and refers to recent trends of research and development in this field. (author)
On accuracy problems for semi-analytical sensitivity analyses
DEFF Research Database (Denmark)
Pedersen, P.; Cheng, G.; Rasmussen, John
1989-01-01
The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried out ... pseudo loads in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study.
Energy Technology Data Exchange (ETDEWEB)
Gasco, C; Navarro, N; Gonzalez, P; Heras, M C; Gapan, M P; Alonso, C; Calderon, A; Sanchez, D; Morante, R; Fernandez, M; Gajate, A; Alvarez, A
2008-08-06
The Department of Vigilancia Radiológica y Radiactividad Ambiental of CIEMAT has developed an analytical methodology for the sequential determination of Fe-55 and Ni-63 in environmental samples, based on the procedure used by the Risø laboratories. The experimental results obtained on the behaviour of major and minor elements (soil and air constituents) in the different types of resins used for separating Fe-55 and Ni-63 are shown in this report. The method for measuring both isotopes by scintillation counting has been optimized with the Ultima Gold scintillation liquid at different concentrations of the stable elements Fe and Ni. The decontamination factors for different gamma emitters in this method are determined experimentally in the presence of a soil matrix. The Fe-55 and Ni-63 activity concentrations and their associated uncertainties have been calculated from the counting data and the sample preparation. A computer application has been implemented in Visual Basic within Excel sheets for: (I) obtaining the counting data from the spectrometer and the counts in each window, (II) graphically representing the background and sample spectra, (III) determining the activity concentration and its associated uncertainty, and (IV) calculating the characteristic limits according to ISO 11929 (2007) at various confidence levels. (Author) 30 refs.
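The calculation of an activity concentration and its counting-statistics uncertainty from gross and background counts can be sketched as below. This is a simplified illustration (function and parameter names are assumptions) and it omits the full ISO 11929 uncertainty budget, e.g. efficiency and chemical-recovery uncertainties and the decision threshold and detection limit.

```python
import math

def activity_concentration(gross_counts, t_gross, bkg_counts, t_bkg,
                           efficiency, sample_quantity):
    """Net activity concentration (Bq per unit sample quantity) and its
    standard uncertainty from Poisson counting statistics alone.
    A full ISO 11929 evaluation would additionally propagate the
    uncertainties of efficiency, recovery and sample preparation, and
    report the characteristic limits."""
    net_rate = gross_counts / t_gross - bkg_counts / t_bkg           # counts/s
    a = net_rate / (efficiency * sample_quantity)                    # Bq/unit
    u_rate = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    u_a = u_rate / (efficiency * sample_quantity)
    return a, u_a

a, u = activity_concentration(1000, 1000.0, 200, 1000.0, 0.5, 2.0)
print(f"A = {a:.3f} +/- {u:.3f} Bq/unit")
```

The same net-rate and propagation formulas are what a spreadsheet implementation, like the one the report describes, would evaluate window by window.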
Sampling problems for randomly broken sticks
Energy Technology Data Exchange (ETDEWEB)
Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)
2003-04-11
Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending size, let S{sub m:n} be the size of the mth smallest fragment. Assume that some observer samples such populations as follows: drop k points (the sample size) at random onto the stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) What is the sample size if the sampling is carried out until the first visit to the smallest fragment (size S{sub 1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (the sequence of their weights in their order of appearance) is needed and studied.
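The sampling experiment described above is easy to simulate. The sketch below is an illustration only, assuming a partition generated by uniform random cut points; it estimates the coupon-collector stopping time of question (2) for one randomly broken stick.

```python
import random

def random_partition(n, rng):
    """Break the unit stick into n fragments with n-1 uniform cut points."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(pts, pts[1:])]

def samples_until_all_visited(weights, rng):
    """Drop uniform points on the stick until every fragment has been
    visited at least once; return the number of points needed (the
    coupon-collector stopping time with unequal cell probabilities)."""
    bounds, acc = [], 0.0
    for w in weights:
        acc += w
        bounds.append(acc)
    visited, n_samples = set(), 0
    while len(visited) < len(weights):
        u = rng.random()
        frag = len(bounds) - 1          # guard against float round-off
        for i, b in enumerate(bounds):
            if u <= b:
                frag = i
                break
        visited.add(frag)
        n_samples += 1
    return n_samples

rng = random.Random(42)
weights = random_partition(5, rng)
trials = [samples_until_all_visited(weights, rng) for _ in range(1000)]
print(sum(trials) / len(trials))  # mean stopping time for this partition
```

Because the smallest fragment dominates the waiting time, the simulated mean is typically far larger than the classical n*H_n coupon-collector value for equal weights.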
A GPU code for analytic continuation through a sampling method
Directory of Open Access Journals (Sweden)
Johan Nordström
2016-01-01
Full Text Available We here present a code for performing analytic continuation of fermionic Green's functions and self-energies, as well as bosonic susceptibilities, on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVIDIA. Detailed scaling tests are presented for two different GPUs in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
Use of robotic systems for radiochemical sample changing and for analytical sample preparation
International Nuclear Information System (INIS)
Delmastro, J.R.; Hartenstein, S.D.; Wade, M.A.
1989-01-01
Two uses of the Perkin-Elmer (PE) robotic system will be presented. In the first, a PE robot functions as an automatic sample changer for up to five low energy photon spectrometry (LEPS) detectors operated with a Nuclear Data ND 6700 system. The entire system, including the robot, is controlled by an IBM PC-AT using software written in compiled BASIC. Problems associated with the development of the system and modifications to the robot will be presented. In the second, an evaluation study was performed to assess the abilities of the PE robotic system for performing complex analytical sample preparation procedures. For this study, a robotic system based upon the PE robot and auxiliary devices was constructed and programmed to perform the preparation of final product samples (UO3) for accountability and impurity specification analyses. These procedures require sample dissolution, dilution, and liquid-liquid extraction steps. The results of an in-depth evaluation of all system components will be presented.
Waste minimization in analytical chemistry through innovative sample preparation techniques
International Nuclear Information System (INIS)
Smith, L. L.
1998-01-01
Because toxic solvents and other hazardous materials are commonly used in analytical methods, characterization procedures generate significant and costly amounts of waste. We are developing alternative analytical methods in the radiological and organic areas to reduce the volume or form of the hazardous waste produced during sample analysis. For the radiological area, we have examined high-pressure, closed-vessel microwave digestion as a way to minimize waste from sample-preparation operations. Heated solutions of strong mineral acids can be avoided for sample digestion by using the microwave approach. Because reactivity increases with pressure, we examined the use of less hazardous solvents to leach selected contaminants from soil for subsequent analysis. We demonstrated the feasibility of this approach by extracting plutonium from a NET reference material using citric and tartaric acids with microwave digestion. Analytical results were comparable to those of traditional digestion methods, while hazardous waste was reduced by a factor of ten. We also evaluated the suitability of other natural acids, determined the extraction performance on a wider variety of soil types, and examined the extraction efficiency for other contaminants. For the organic area, we examined ways to minimize the wastes associated with the determination of polychlorinated biphenyls (PCBs) in environmental samples. Conventional methods for analyzing semivolatile organic compounds are labor intensive and require copious amounts of hazardous solvents. For soil and sediment samples, we developed a method to analyze PCBs that is based on microscale extraction using benign solvents (e.g., water or hexane). The extraction is performed at elevated temperatures in stainless steel cells containing the sample and solvent. Gas chromatography-mass spectrometry (GC/MS) was used to quantitate the analytes in the isolated extract. More recently, we developed a method utilizing solid-phase microextraction (SPME) for natural
Asbestos quantification in track ballast, a complex analytical problem
Cavallo, Alessandro
2016-04-01
Track ballast forms the trackbed upon which railroad ties are laid. It is used to bear the load of the railroad ties, to facilitate water drainage, and to keep down vegetation. It is typically made of angular crushed stone with a grain size between 30 and 60 mm and good mechanical properties (high compressive strength, freeze-thaw resistance, resistance to fragmentation). The most common rock types are basalts, porphyries, orthogneisses, some carbonate rocks and "green stones" (serpentinites, prasinites, amphibolites, metagabbros). "Green stones" in particular may contain traces, and sometimes appreciable amounts, of asbestiform minerals (chrysotile and/or fibrous amphiboles, generally tremolite-actinolite). In Italy, the chrysotile asbestos mine in Balangero (Turin) produced over 5 Mt of railroad ballast (crushed serpentinites), which was used for the railways in northern and central Italy from 1930 up to 1990. In addition to Balangero, several other serpentinite and prasinite quarries (e.g. in Emilia Romagna) provided railway ballast up to the year 2000. The legal threshold for the asbestos content of track ballast is set at 1000 ppm: if the value is below this threshold, the material can be reused; otherwise it must be disposed of as hazardous waste, at very high cost. The quantitative determination of asbestos in rocks is a very complex analytical issue: although techniques like TEM-SAED and micro-Raman are very effective in the identification of asbestos minerals, a quantitative determination on bulk materials is almost impossible, or very expensive and time consuming. Another problem is the discrimination of asbestiform minerals (e.g. chrysotile, asbestiform amphiboles) from the common acicular or pseudo-fibrous varieties (lamellar serpentine minerals, prismatic/acicular amphiboles). In this work, more than 200 samples from the main Italian rail yards were characterized by a combined use of XRD and a special SEM
Analytic semigroups and optimal regularity in parabolic problems
Lunardi, Alessandra
2012-01-01
The book shows how the abstract methods of analytic semigroups and evolution equations in Banach spaces can be fruitfully applied to the study of parabolic problems. Particular attention is paid to optimal regularity results in linear equations. Furthermore, these results are used to study several other problems, especially fully nonlinear ones. Owing to the new unified approach chosen, known theorems are presented from a novel perspective and new results are derived. The book is self-contained. It is addressed to PhD students and researchers interested in abstract evolution equations and in p
Analytical Evaluation of Beam Deformation Problem Using Approximate Methods
DEFF Research Database (Denmark)
Barari, Amin; Kimiaeifar, A.; Domairry, G.
2010-01-01
The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four approximate analytical methods, namely the Perturbation Method, the Homotopy Perturbation Method (HPM), the Homotopy Analysis Method (HAM) and the Variational Iteration Method (VIM). Comparison of the results reveals that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.
A semi-analytical iterative technique for solving chemistry problems
Directory of Open Access Journals (Sweden)
Majeed Ahmed AL-Jawary
2017-07-01
Full Text Available The main aim and contribution of the current paper is to implement a semi-analytical iterative method, suggested by Temimi and Ansari in 2011 and known as TAM, to solve two chemical problems. The approximate solutions obtained by the TAM converge quickly. The problems considered are the absorption of carbon dioxide into phenyl glycidyl ether and a chemical kinetics problem. Both are represented by systems of nonlinear ordinary differential equations with boundary and initial conditions. Error analysis of the approximate solutions is carried out using the error remainder and the maximal error remainder, and an exponential rate of convergence is observed. For both problems the results of the TAM are compared with results obtained previously by other methods available in the literature. The results demonstrate that the method has many merits: it is derivative-free, and it avoids the difficulty of calculating Adomian polynomials to handle the nonlinear terms in the Adomian Decomposition Method (ADM). It does not require calculating the Lagrange multiplier as in the Variational Iteration Method (VIM), in which the terms of the sequence become complex after several iterations, making analytical evaluation of terms very difficult or impossible. Nor does it require constructing a homotopy and solving the corresponding algebraic equations as in the Homotopy Perturbation Method (HPM). The MATHEMATICA® 9 software was used to evaluate terms in the iterative process.
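The flavour of such a semi-analytical iteration can be sketched on a toy initial-value problem. This is a hedged illustration in the TAM spirit (solve the linear part against the nonlinearity evaluated at the previous iterate), not the paper's chemistry systems: for u' = -u², u(0) = 1 (exact solution 1/(1+t)), each iterate is u_{n+1}(t) = 1 + ∫₀ᵗ N(u_n) ds, evaluated here with a trapezoid rule.

```python
# TAM-style iteration for u' = -u**2, u(0) = 1: the linear part L(u) = u'
# is solved at each step against N evaluated at the previous iterate.
def tam_iterate(u0, ts, n_iter):
    u = [u0 for _ in ts]                      # initial guess solves L(u) = 0, u(0) = u0
    for _ in range(n_iter):
        rhs = [-(v * v) for v in u]           # nonlinearity N(u) = -u^2
        integral = [0.0]
        for i in range(1, len(ts)):           # cumulative trapezoid rule
            dt = ts[i] - ts[i - 1]
            integral.append(integral[-1] + 0.5 * dt * (rhs[i] + rhs[i - 1]))
        u = [u0 + s for s in integral]        # u_{n+1}(t) = u0 + integral of N(u_n)
    return u

ts = [i * 0.005 for i in range(101)]          # t in [0, 0.5]
exact = [1.0 / (1.0 + t) for t in ts]
errs = []
for n in (1, 3, 6):
    approx = tam_iterate(1.0, ts, n)
    errs.append(max(abs(a - e) for a, e in zip(approx, exact)))
print(errs)  # the maximal error remainder shrinks with the iteration count
```

The shrinking maximal error across iterations mirrors the "maximal error remainder" diagnostic used in the paper.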
Sampling and analyte enrichment strategies for ambient mass spectrometry.
Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei
2018-01-01
Ambient mass spectrometry provides great convenience for fast screening and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.
Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua
2015-09-01
Small polar molecules such as nucleosides, amines and amino acids are important analytes in the biological, food, environmental and other fields. It is necessary to develop efficient sample-preparation and sensitive analytical methods for the rapid analysis of these small polar molecules in complex matrices. Typical materials used in sample preparation, including silica, polymers, carbon, boric acid and others, are introduced in this paper. The applications and development of analytical methods for small polar molecules, such as reversed-phase liquid chromatography and hydrophilic interaction chromatography, are also reviewed.
Analytic Solution to Shell Boundary – Value Problems
Directory of Open Access Journals (Sweden)
Yu. I. Vinogradov
2015-01-01
Full Text Available The object of this research is to find analytical solutions to shell boundary-value problems, i.e. to solve a class of problems concerning the mechanics of strain in closed hoop shells. The objective is to create an analytical method for determining the stress-strain state of shells under non-axisymmetric loading; the main goal is thus to derive formula solutions of linear ordinary differential equations with variable continuous coefficients. The partial differential equations of shell strain mechanics are reduced by Fourier's method of separation of variables to a system of ordinary differential equations. The paper presents formulas for the solutions of the homogeneous differential equations and, on their basis, formulas for a particular solution depending on the form of the right-hand sides of the differential equations. The analytical algorithm for solving a boundary-value problem transfers the boundary conditions to an arbitrarily chosen point of the interval of the independent variable through the solution of the canonical matrix ordinary differential equation, followed by the solution of a system of algebraic equations for the compatibility of the boundary conditions at this point. The efficiency of the algorithm rests on the fact that the solutions of the ordinary differential equations are defined as values of Cauchy-Krylov functions, which satisfy arbitrary initial conditions. The results presented here are useful to specialists in computational mathematics dealing with systems of linear ordinary differential equations and with the development of effective analytical methods for solving shell boundary-value problems.
Sampling analytical tests and destructive tests for quality assurance
International Nuclear Information System (INIS)
Saas, A.; Pasquini, S.; Jouan, A.; de Angelis; Hreen Taywood, H.; Odoj, R.
1990-01-01
In the context of the third programme of the European Communities on the monitoring of radioactive waste, various methods have been developed for performing sampling and measuring tests on encapsulated waste of low and medium level activity on the one hand, and of high level activity on the other. The purpose was to provide better quality assurance for products to be stored on an interim or long-term basis. Various sampling means are proposed, such as: - sampling of raw waste before conditioning and determination of a representative aliquot, - sampling of encapsulated waste at the process output, - sampling of core specimens subjected to measurement before and after cutting. Equipment suitable for these sampling procedures has been developed and, in the case of core samples, a comparison of techniques has been made. The results are described for the various analytical tests carried out on the samples, such as: - mechanical tests, - radiation resistance, - fire resistance, - lixiviation, - determination of free water, - biodegradation, - water resistance, - chemical and radiochemical analysis. Wherever possible, these tests were compared with non-destructive tests on full-scale packages, and some correlations are given. This work has made it possible to improve and refine sample optimization, with fine sampling techniques and methodologies, and to draw up characterization procedures. It also provided the occasion for a first collaboration between the laboratories responsible for these studies, which will be furthered in the scope of the 1990-1994 programme.
Analytical characterization of high-level mixed wastes using multiple sample preparation treatments
International Nuclear Information System (INIS)
King, A.G.; Baldwin, D.L.; Urie, M.W.; McKinley, S.G.
1994-01-01
The Analytical Chemistry Laboratory at the Pacific Northwest Laboratory in Richland, Washington, is actively involved in the analytical characterization of high-level mixed waste from Hanford's single-shell and double-shell tank characterization programs. A full suite of analyses is typically performed on homogenized tank core samples. These analytical techniques include inductively-coupled plasma-atomic emission spectroscopy, total organic carbon methods and radiochemistry methods, as well as many others, all requiring some type of remote sample-preparation treatment to solubilize the tank sludge material for analysis. Most of these analytical methods typically use a single sample-preparation treatment, inherently providing elemental information only. To better understand and interpret tank chemistry and to assist in identifying chemical compounds, selected analytical methods are performed using multiple sample-preparation treatments. The sample-preparation treatments used at Pacific Northwest Laboratory for this work with high-level mixed waste include caustic fusion, acid digestion, and water leach. The type of information available by comparing results from different sample-preparation treatments includes evidence for the presence of refractory compounds, acid-soluble compounds, or water-soluble compounds. Problems unique to the analysis of Hanford tank wastes are discussed. Selected results from the Hanford single-shell ferrocyanide tank, 241-C-109, are presented, and the resulting conclusions are discussed.
An analytical approach to managing complex process problems
Energy Technology Data Exchange (ETDEWEB)
Ramstad, Kari; Andersen, Espen; Rohde, Hans Christian; Tydal, Trine
2006-03-15
The oil companies continuously invest time and money to ensure optimum regularity of their production facilities. High regularity increases profitability, reduces the workload on the offshore organisation and, most importantly, reduces discharges to air and sea. There are a number of mechanisms and tools available to achieve high regularity. Most of these are related to maintenance, system integrity, well operations and process conditions. However, all of these tools are only effective if quick and proper analysis of fluids and deposits is carried out. In fact, analytical backup is a powerful tool for maintaining optimised oil production, and should as such be given high priority. The present Operator (Hydro Oil and Energy) and the Chemical Supplier (MI Production Chemicals) have developed a cooperation to ensure that analytical backup is provided efficiently to the offshore installations. The Operator's Research and Development (R and D) departments and the Chemical Supplier have complementary specialties in both personnel and equipment, and this is utilized to give the best possible service when required by production technologists or operations. In order for the Operator's research departments, Health, Safety and Environment (HSE) departments and Operations to approve analytical work performed by the Chemical Supplier, a number of analytical tests are carried out following procedures agreed by both companies. In the present paper, three field case examples of analytical cooperation for managing process problems are presented: 1) deposition in a complex platform processing system; 2) contaminated production chemicals; 3) improved monitoring of scale inhibitor, suspended solids and ions. In each case the Research Centre, Operations and the Chemical Supplier have worked closely together to achieve fast solutions and Best Practice. (author) (tk)
The boundary value problem for discrete analytic functions
Skopenkov, Mikhail
2013-06-01
This paper is on further development of discrete complex analysis introduced by R. Isaacs, J. Ferrand, R. Duffin, and C. Mercat. We consider a graph lying in the complex plane and having quadrilateral faces. A function on the vertices is called discrete analytic if for each face the difference quotients along the two diagonals are equal. We prove that the Dirichlet boundary value problem for the real part of a discrete analytic function has a unique solution. In the case when each face has orthogonal diagonals we prove that this solution uniformly converges to a harmonic function in the scaling limit. This solves a problem of S. Smirnov from 2010. This was proved earlier by R. Courant, K. Friedrichs, and H. Lewy and by L. Lusternik for square lattices, by D. Chelkak and S. Smirnov, and implicitly by P. G. Ciarlet and P.-A. Raviart for rhombic lattices. In particular, our result implies uniform convergence of the finite element method on Delaunay triangulations. This solves a problem of A. Bobenko from 2011. The methodology is based on energy estimates inspired by alternating-current network theory. © 2013 Elsevier Ltd.
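The defining condition is easy to state in code. The sketch below is an illustrative check (not from the paper) of the equality of difference quotients along the two diagonals of a single quadrilateral face:

```python
# One quadrilateral face with vertices a, b, c, d (in cyclic order); the
# diagonals are (a, c) and (b, d). Discrete analyticity on the face means
# the two diagonal difference quotients agree.
def is_discrete_analytic_on_face(f, a, b, c, d, tol=1e-12):
    return abs((f(c) - f(a)) / (c - a) - (f(d) - f(b)) / (d - b)) < tol

# Unit square face of a square lattice.
a, b, c, d = 0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j
print(is_discrete_analytic_on_face(lambda z: z, a, b, c, d))       # the identity passes
print(is_discrete_analytic_on_face(lambda z: z.real, a, b, c, d))  # Re z fails
```

The real part of a discrete analytic function, taken alone, generally fails this condition; the paper's Dirichlet problem prescribes boundary values for the real part and recovers the full discrete analytic function.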
Analytic continuation of quantum Monte Carlo data. Stochastic sampling method
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)
2016-07-01
We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.
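The basic idea of stochastic sampling can be illustrated with a deliberately simplified toy (not the authors' algorithm): discretize the spectrum on a frequency grid and run a Metropolis walk over non-negative, normalized spectra weighted by exp(-χ²/2Θ), where χ² measures the misfit of the reconstructed G(τ) to the data. All grids, the kernel convention and the sampling temperature below are illustrative assumptions.

```python
import math
import random

rng = random.Random(1)
beta, n_tau, n_w = 5.0, 16, 30
taus = [beta * i / n_tau for i in range(n_tau)]
ws = [-5.0 + 10.0 * j / (n_w - 1) for j in range(n_w)]
dw = ws[1] - ws[0]

def kernel(tau, w):
    # Fermionic kernel e^(-tau*w) / (1 + e^(-beta*w)), written to avoid overflow.
    if w >= 0:
        return math.exp(-tau * w) / (1.0 + math.exp(-beta * w))
    return math.exp((beta - tau) * w) / (1.0 + math.exp(beta * w))

K = [[kernel(t, w) * dw for w in ws] for t in taus]

def g_from_a(a):
    return [sum(kw * av for kw, av in zip(row, a)) for row in K]

# Synthetic "QMC data": a normalized Gaussian spectrum centred at w = 1.
true_a = [math.exp(-(w - 1.0) ** 2) for w in ws]
norm = sum(true_a) * dw
true_a = [v / norm for v in true_a]
g_data = g_from_a(true_a)
sigma = 1e-3  # assumed data uncertainty

def chi2(a):
    return sum(((g - gd) / sigma) ** 2 for g, gd in zip(g_from_a(a), g_data))

a = [1.0 / (n_w * dw)] * n_w  # flat start with the same normalization
c0 = c = chi2(a)
theta = 1.0                   # sampling "temperature"
for _ in range(1500):
    i, j = rng.randrange(n_w), rng.randrange(n_w)
    amount = rng.random() * 0.5 * a[i]
    trial = list(a)
    trial[i] -= amount        # move weight between bins:
    trial[j] += amount        # keeps A >= 0 and the normalization fixed
    ct = chi2(trial)
    if ct < c or rng.random() < math.exp((c - ct) / (2.0 * theta)):
        a, c = trial, ct
print("chi2 reduced:", c < c0)
```

Real StochS implementations average the sampled spectra rather than keeping a single walker, and use far more sophisticated updates; the toy only shows the χ²-weighted random walk at the core of the approach.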
Negative dielectrophoresis spectroscopy for rare analyte quantification in biological samples
Kirmani, Syed Abdul Mannan; Gudagunti, Fleming Dackson; Velmanickam, Logeeshan; Nawarathna, Dharmakeerthi; Lima, Ivan T., Jr.
2017-03-01
We propose the use of negative dielectrophoresis (DEP) spectroscopy as a technique to improve the detection limit of rare analytes in biological samples. We observe a significant dependence of the negative DEP force on functionalized polystyrene beads at the edges of interdigitated electrodes with respect to the frequency of the electric field. We measured the velocity of repulsion for 0% and 0.8% conjugation of avidin with biotin-functionalized polystyrene beads using our automated software, which monitors the Rayleigh scattering from the beads through real-time image processing. A significant difference in the velocity of the beads was observed in the presence of as little as 80 molecules of avidin per biotin-functionalized bead. This technology can be applied to the detection and quantification of rare analytes, which can be useful in the diagnosis and treatment of diseases such as cancer and myocardial infarction, using polystyrene beads functionalized with antibodies against the target biomarkers.
Sasaki, M; Hashimoto, E
1993-07-01
In the field of clinical chemistry in Japan, automated analytical instruments first appeared in the 1960s with the rapid development of the electronics industry. After a series of improvements and modifications over the past thirty years, these analytical instruments have become excellent and multifunctional. As a result of these developments, it is now well recognized that automated analytical instruments are indispensable for managing the modern clinical laboratory. On the other hand, these automated instruments uncovered various problems that had gone undetected when manually operated instruments were used. For instance, variation in commercially available standard solutions, due to the lack of government control, causes different values to be obtained in different institutions. In addition, there are problems such as a shortage of medical technologists, complicated sample handling and increased labor costs. Furthermore, inadequate maintenance causes frequent erroneous laboratory reports despite the latest and most efficient analytical instruments being installed. The working process in the clinical laboratory must therefore be systematized to achieve rapidity and effectiveness. In the present report, we review the developmental history of automation of analytical instruments, discuss the problems in creating an effective clinical laboratory and explore ways to deal with these emerging issues in clinical laboratory automation technology.
Analytical simulation of RBS spectra of nanowire samples
Energy Technology Data Exchange (ETDEWEB)
Barradas, Nuno P., E-mail: nunoni@ctn.ist.utl.pt [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, E.N. 10 ao km 139,7, 2695-066 Bobadela LRS (Portugal); García Núñez, C. [Laboratorio de Electrónica y Semiconductores, Departamento de Física Aplicada, Universidad Autónoma de Madrid, 28049 Madrid (Spain); Redondo-Cubero, A. [Laboratorio de Electrónica y Semiconductores, Departamento de Física Aplicada, Universidad Autónoma de Madrid, 28049 Madrid (Spain); Centro de Micro-Análisis de Materiales, Universidad Autónoma de Madrid, 28049 Madrid (Spain); Shen, G.; Kung, P. [Department of Electrical and Computer Engineering, The University of Alabama, AL 35487 (United States); Pau, J.L. [Laboratorio de Electrónica y Semiconductores, Departamento de Física Aplicada, Universidad Autónoma de Madrid, 28049 Madrid (Spain)
2016-03-15
Almost all, if not all, general purpose codes for analysis of Ion Beam Analysis data have been originally developed to handle laterally homogeneous samples only. This is the case of RUMP, NDF, SIMNRA, and even of the Monte Carlo code Corteo. General-purpose codes usually include only limited support for lateral inhomogeneity. In this work, we show analytical simulations of samples that consist of a layer of parallel oriented nanowires on a substrate, using a model implemented in NDF. We apply the code to real samples, made of vertical ZnO nanowires on a sapphire substrate. Two configurations of the nanowires were studied: 40 nm diameter, 4.1 μm height, 3.5% surface coverage; and 55 nm diameter, 1.1 μm height, 42% surface coverage. We discuss the accuracy and limits of applicability of the analysis.
Interpolation and sampling in spaces of analytic functions
Seip, Kristian
2004-01-01
The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...
Analytical solutions of the electrostatically actuated curled beam problem
Younis, Mohammad I.
2014-07-24
This work presents analytical expressions for the electrostatically actuated, initially deformed cantilever beam problem. The formulation is based on the continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data and to numerical results of a multi-mode reduced-order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, these formulas yield less accurate results due to the limitations of the single-mode approximation; in such cases, multi-mode reduced-order models are shown to yield accurate results. © 2014 Springer-Verlag Berlin Heidelberg.
Privacy problems in the small sample selection
Directory of Open Access Journals (Sweden)
Loredana Cerbara
2013-05-01
Full Text Available The side of social research that uses small samples for the production of micro data today faces operating difficulties due to privacy law. The privacy code is an important and necessary law, because it guarantees Italian citizens' rights, as already happens in other countries of the world. However, it does not seem appropriate to further limit the data-production possibilities of national research centres; those possibilities are already compromised by insufficient funds, an increasingly common problem in the research field. It would therefore be appropriate to include in the law the possibility of using telephone lists to select samples for activities of direct interest and importance to citizens, such as data collection carried out through opinion polls by the research centres of the Italian CNR and some universities.
Paper Capillary Enables Effective Sampling for Microfluidic Paper Analytical Devices.
Shangguan, Jin-Wen; Liu, Yu; Wang, Sha; Hou, Yun-Xuan; Xu, Bi-Yi; Xu, Jing-Juan; Chen, Hong-Yuan
2018-06-06
Paper capillary is introduced to enable effective sampling on microfluidic paper analytical devices. By coupling the macroscale capillary force of the paper capillary with the microscale capillary forces of native paper, fluid transport can be flexibly tailored with proper design. On this basis, a hybrid-fluid-mode paper capillary device is proposed that enables fast and reliable sampling in an arrayed format, with less surface adsorption and less bias between components. The resulting device supports high-throughput, quantitative and repeatable assays operated entirely by hand. With all these merits, multiplex analysis of ions, proteins and microbes has been realized on this platform, paving the way to higher-level analysis on μPADs.
Nuclear analytical techniques and their application to environmental samples
International Nuclear Information System (INIS)
Lieser, K.H.
1986-01-01
A survey is given on nuclear analytical techniques and their application to environmental samples. Measurement of the inherent radioactivity of elements or radionuclides allows determination of natural radioelements (e.g. Ra), man-made radioelements (e.g. Pu) and radionuclides in the environment. Activation analysis, in particular instrumental neutron activation analysis, is a very reliable and sensitive method for determination of a great number of trace elements in environmental samples, because the most abundant main constituents are not activated. Tracer techniques are very useful for studies of the behaviour and of chemical reactions of trace elements and compounds in the environment. Radioactive sources are mainly applied for excitation of characteristic X-rays (X-ray fluorescence analysis). (author)
Analytical artefacts in the speciation of arsenic in clinical samples
International Nuclear Information System (INIS)
Slejkovec, Zdenka; Falnoga, Ingrid; Goessler, Walter; Elteren, Johannes T. van; Raml, Reingard; Podgornik, Helena; Cernelc, Peter
2008-01-01
Urine and blood samples of cancer patients treated with high doses of arsenic trioxide were analysed for arsenic species using HPLC-HGAFS and, in some cases, HPLC-ICPMS. Total arsenic was determined either with flow injection-HGAFS in urine or by radiochemical neutron activation analysis in blood fractions (serum/plasma, blood cells). The total arsenic concentrations (during prolonged, daily/weekly arsenic trioxide therapy) were in the μg mL -1 range for urine and in the ng g -1 range for blood fractions. The main arsenic species found in urine were As(III), MA and DMA, and in blood As(V), MA and DMA. With proper sample preparation and storage of urine (no preservation agents, storage in liquid nitrogen) no analytical artefacts were observed, and the absence of significant amounts of alleged trivalent metabolites was proven. In blood samples, by contrast, a certain amount of arsenic can be lost in the speciation procedure; this was especially noticeable for the blood cells, although plasma/serum also showed some loss of arsenic. The latter losses may be attributed to precipitation of As(III)-containing proteins/peptides during the methanol/water extraction procedure, whereas the former were due to loss of specific As(III)-complexing proteins/peptides (e.g. cysteine, metallothionein, reduced GSH, ferritin) on the column (Hamilton PRP-X100) during the separation procedure. Contemporary analytical protocols are not able to completely avoid artefacts and losses from the sampling to the detection stage, so results should be interpreted with care, particularly regarding metabolic and pharmacokinetic interpretations, and the sum of species should always be compared with the total arsenic concentration determined independently.
Numerical and analytical solutions for problems relevant for quantum computers
International Nuclear Information System (INIS)
Spoerl, Andreas
2008-01-01
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise in the implementation of quantum operations (as building blocks in a quantum algorithm) or in the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high-quality operations, a closer look was taken at the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation; the resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study investigated whether there exists a sufficient set of criteria to identify systems whose dynamics are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
Macro elemental analysis of food samples by nuclear analytical technique
Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.
2017-06-01
Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly method compared with other detection techniques; it is therefore well suited to food inspection. The macro elements calcium and potassium are important nutrients required by the human body for optimal physiological function, so the Ca and K content of various foods needs to be determined. The aim of this work is to demonstrate the applicability of EDXRF to food analysis. The analytical performance of non-destructive EDXRF was compared with that of two other analytical techniques, neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The comparison served to cross-check the analysis results and to overcome the limitations of each of the three methods. The results showed that Ca concentrations found in food using EDXRF and AAS were not significantly different (p-value 0.9687), whereas the p-value for K between EDXRF and NAA was 0.6575. The correlation between the results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. The methods were validated using SRM NIST 1548a Typical Diet. The results showed good agreement between methods; therefore the EDXRF method can be used as an alternative for the determination of Ca and K in food samples.
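The method-comparison statistics quoted in this abstract (a p-value for paired results from two techniques and a Pearson correlation coefficient) can be sketched as follows. This is a minimal illustration with hypothetical Ca concentrations; it uses only standard-library arithmetic, not the authors' actual data or software.

```python
# Hedged sketch: comparing paired results from two analytical techniques
# (hypothetical data) via the Pearson correlation and a paired t statistic.
from statistics import mean, stdev
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def paired_t(x, y):
    """Paired t statistic for the per-sample differences between two methods."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical Ca results (mg/100 g) for the same foods by EDXRF and AAS
edxrf = [52.1, 110.4, 35.8, 250.2, 75.0]
aas   = [51.8, 112.0, 36.5, 248.9, 74.2]

print(round(pearson_r(edxrf, aas), 4), round(paired_t(edxrf, aas), 3))
```

A t statistic near zero (p-value near 1, as in the abstract's 0.9687 for Ca) together with a Pearson r close to 1 indicates the two methods agree within their scatter.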
Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.
Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical techniques are presented.
Analytical Method to Estimate the Complex Permittivity of Oil Samples
Directory of Open Access Journals (Sweden)
Lijuan Su
2018-03-01
In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
Voltammetric technique, a panacea for analytical examination of environmental samples
International Nuclear Information System (INIS)
Zahir, E.; Mohiuddin, S.; Naqvi, I.I.
2012-01-01
Voltammetric methods are recommended for trace metal analysis in environmental samples of marine origin such as mangrove, sediments and shrimps. Three different electro-analytical techniques were used: polarography, anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV). Cd²⁺, Pb²⁺, Cu²⁺ and Mn²⁺ were determined by ASV, Cr⁶⁺ was analyzed by AdSV, and Fe²⁺, Zn²⁺, Ni²⁺ and Co²⁺ were determined by polarography. The pairs Fe²⁺/Zn²⁺ and Ni²⁺/Co²⁺ were determined in two separate runs, while Cd²⁺, Pb²⁺ and Cu²⁺ were analyzed in a single ASV run. The sensitivity and speciation capabilities of the voltammetric methods were exploited. Analysis conditions were optimized, including the choice of supporting electrolyte, pH, working electrodes, sweep rate, etc. Stripping voltammetry was adopted for analysis at ultra-trace levels. Statistical parameters for analytical method development, such as selectivity factor, interference, repeatability (0.0065-0.130 μg/g), reproducibility (0.08125-1.625 μg/g), detection limits (0.032-5.06 μg/g), limits of quantification (0.081-12.652 μg/g) and sensitivities (5.636-2.15 nA mL μg⁻¹), were also determined. The percentage recoveries were found to be between 95-105% using certified reference materials. Real samples from the complex marine environment of the Karachi coastline were also analyzed. The standard addition method was employed wherever a matrix effect was evidenced. (author)
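The figures of merit listed in this abstract (detection limit, limit of quantification, percentage recovery) are conventionally derived from blank noise and a calibration slope. The sketch below illustrates that convention with hypothetical blank currents and a slope; it is not the authors' data or procedure.

```python
# Hedged sketch (hypothetical numbers): detection limit and limit of
# quantification from blank noise and calibration slope (IUPAC 3s/10s
# convention), plus percentage recovery against a certified value.
from statistics import stdev

def limits(blank_signals, slope):
    """Return (LOD, LOQ): 3*sigma_blank/slope and 10*sigma_blank/slope."""
    s = stdev(blank_signals)
    return 3 * s / slope, 10 * s / slope

def recovery_percent(found, certified):
    """Percentage recovery against a certified/spiked concentration."""
    return 100.0 * found / certified

# Hypothetical blank peak currents (nA) and a calibration slope (nA mL/ug)
blanks = [0.41, 0.38, 0.44, 0.40, 0.42]
slope = 2.15

lod, loq = limits(blanks, slope)
print(round(lod, 3), round(loq, 3), recovery_percent(4.85, 5.00))
```

Recoveries falling in the 95-105% band quoted in the abstract are the usual acceptance criterion when validating against certified reference materials.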
Analytical methodologies for the determination of benzodiazepines in biological samples.
Persona, Karolina; Madej, Katarzyna; Knihnicki, Paweł; Piekoszewski, Wojciech
2015-09-10
Benzodiazepine drugs are among the most important and widely used medicaments. They demonstrate such therapeutic properties as anxiolytic, sedative, somnifacient, anticonvulsant, diastolic and muscle relaxant effects. However, despite the fact that benzodiazepines possess a high therapeutic index and are considered relatively safe, their use can be dangerous when: (1) co-administered with alcohol; (2) co-administered with other medicaments such as sedatives, antidepressants, neuroleptics or morphine-like substances; (3) driving under their influence; (4) used non-therapeutically as drugs of abuse or in drug-facilitated crimes. For these reasons benzodiazepines are still studied and determined in a variety of biological materials. In this article, sample preparation techniques which have been applied in the analysis of benzodiazepine drugs in biological samples are reviewed. The next part of the article focuses on a review of analytical methods which have been employed for pharmacological, toxicological or forensic study of this group of drugs in biological matrices. The review is preceded by a description of the physicochemical properties of the selected benzodiazepines and of two sedative-hypnotic drugs that very often coexist with them in the analyzed samples. Copyright © 2015. Published by Elsevier B.V.
Evaluation of analytical results on DOE Quality Assessment Program Samples
International Nuclear Information System (INIS)
Jaquish, R.E.; Kinnison, R.R.; Mathur, S.P.; Sastry, R.
1985-01-01
Criteria were developed for evaluating participants' analytical results in the DOE Quality Assessment Program (QAP). Historical data from previous QAP studies were analyzed using descriptive statistical methods to determine the interlaboratory precision that had been attained. Performance criteria used in other similar programs were also reviewed. Using these data, precision values and control limits were recommended for each type of analysis performed in the QA program. Results of the analyses performed by the QAP participants on the November 1983 samples were statistically analyzed and evaluated. The Environmental Measurements Laboratory (EML) values were used as the known values, and 3-sigma precision values were used as control limits. Results were submitted by 26 participating laboratories for 49 different radionuclide-media combinations. The participants reported 419 results, of which 350, or 84%, were within control limits. Special attention was given to the data from gamma spectral analysis of air filters and water samples. Both normal probability plots and box plots were prepared for each nuclide to help evaluate the distribution of the data. Results outside the expected range were identified, and it was suggested that laboratories check their calculations and procedures for these results.
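The 3-sigma screening described in this abstract can be sketched as follows: each participant result is compared against the known (EML) value with control limits at plus or minus three times the recommended precision, and the in-control fraction is reported. The radionuclide, values and laboratory results below are hypothetical illustrations, not QAP data.

```python
# Hedged sketch (hypothetical data): screening interlaboratory results
# against 3-sigma control limits around a known reference value.
def within_limits(results, known, sigma, k=3.0):
    """Return the subset of results inside known +/- k*sigma."""
    lo, hi = known - k * sigma, known + k * sigma
    return [r for r in results if lo <= r <= hi]

# Hypothetical Cs-137-in-water results (Bq/L) from participating labs
known, sigma = 100.0, 5.0
reported = [97.2, 103.5, 88.0, 101.1, 123.9, 99.4, 95.0, 106.8]

ok = within_limits(reported, known, sigma)
print(f"{len(ok)}/{len(reported)} within control limits "
      f"({100 * len(ok) / len(reported):.0f}%)")
```

The same tally over all 419 reported results is what yields the 84% in-control figure quoted above.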
Development of analytical techniques for water and environmental samples (2)
Energy Technology Data Exchange (ETDEWEB)
Eum, Chul Hun; Jeon, Chi Wan; Jung, Kang Sup; Song, Kyung Sun; Kim, Sang Yeon [Korea Institute of Geology Mining and Materials, Taejon (Korea)
1998-12-01
The purpose of this study is to develop new analytical methods with good detection limits for toxic inorganic and organic compounds. During the second year of this project, CN, organic acids and particulate materials in environmental samples were analysed using several methods, including ion chromatography, SPE, SPME, GC/MS, GC/FID and SPLITT (split-flow thin cell fractionation). The advantages and disadvantages of several distillation methods (KS, JIS, EPA) for CN analysis in wastewater were investigated. As a result, we proposed a new distillation apparatus for CN analysis, which proved to be simpler and faster than the conventional apparatus and to give better recovery. An ion chromatography/pulsed amperometric detection (IC/PAD) system was set up in place of colorimetry for CN detection, to overcome matrix interference. SPE (solid phase extraction) and SPME (solid phase micro extraction), as liquid-solid extraction techniques, were applied to the analysis of phenols in wastewater. Optimum experimental conditions and the factors influencing analytical results were determined. From these results, it could be concluded that C18 cartridges and polystyrene-divinylbenzene disks were suitable solid-phase adsorbents for phenol in the SPE method, and polyacrylate fibers in SPME. Optimum conditions for the simultaneous analysis of phenol derivatives were established. Continuous SPLITT fractionation (CSF) is a new preparative separation technique that is useful for the fractionation of particulate and macromolecular materials. CSF is carried out in a thin ribbon-like channel equipped with splitters at both the inlet and the outlet of the channel. In this work, we set up a new CSF system and tested it using polystyrene latex standard particles. We then fractionated particles contained in air and underground water on the basis of their sedimentation coefficients using CSF. (author). 27 refs., 13 tabs., 31 figs.
Advanced Curation: Solving Current and Future Sample Return Problems
Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.
2015-01-01
Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample-related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current curation practices, input from curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the basis of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges, since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/cm² total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon.
Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem
Younis, Mohammad I.
2014-01-01
We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions
Analytical and Numerical Studies of Several Fluid Mechanical Problems
Kong, D. L.
2014-03-01
In this thesis, three parts, each with several chapters, are respectively devoted to hydrostatic, viscous, and inertial fluid theories and applications. The topics involved include planetary and biological fluid systems and high-performance computing technology. In the hydrostatics part, the classical Maclaurin spheroids theory is generalized, for the first time, to a more realistic multi-layer model, establishing the geometries of both the outer surface and the interfaces. In one of its astrophysical applications, the theory explicitly predicts the physical shapes of the surface and core-mantle boundary for layered terrestrial planets, which enables the study of some gravity problems and direct numerical simulations of dynamo flows in rotating planetary cores. As another application of the figure theory, the zonal flow in the deep atmosphere of Jupiter is investigated for a better understanding of the Jovian gravity field. An upper bound on the gravity field distortions, especially in the higher-order zonal gravitational coefficients, induced by deep zonal winds is estimated for the first time. The oblate spheroidal shape of an undistorted Jupiter resulting from its fast solid-body rotation is fully taken into account, which marks the most significant improvement over previous approximation-based Jovian wind theories. High-viscosity flows, for example Stokes flows, occur in many processes involving low-speed motion in fluids. Microorganism swimming is a typical case. A fully three-dimensional analytic solution of the incompressible Stokes equation is derived in the exterior domain of an arbitrarily translating and rotating prolate spheroid, which models a large family of microorganisms such as cocci bacteria. The solution is then applied to the magnetotactic bacteria swimming problem, and good consistency has been found between theoretical predictions and laboratory observations of the moving patterns of such bacteria under magnetic fields. In the analysis of dynamics of planetary
Intercalibration of analytical methods on marine environmental samples
International Nuclear Information System (INIS)
1988-06-01
The pollution of the seas by various chemical substances is nowadays one of the principal concerns of mankind. The International Atomic Energy Agency has organized in past years several intercomparison exercises in the framework of its Analytical Quality Control Service. The present intercomparison had a double aim: first, to give the laboratories participating in it an opportunity to check their analytical performance; second, to produce, on the basis of its results, a reference material made of fish tissue which would be accurately certified with respect to many trace elements. Such a material could be used by analytical chemists to check the validity of new analytical procedures. In total, 53 laboratories from 29 countries reported results (585 laboratory means for 48 elements). 5 refs, 52 tabs
Hanford analytical sample projections FY 1998 - FY 2002
International Nuclear Information System (INIS)
Joyce, S.M.
1997-01-01
Sample projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Sample projections are categorized by radiation level, protocol, sample matrix and program. Analysis requirements are also presented.
Directory of Open Access Journals (Sweden)
Hiroko Kudo
2017-04-01
Insufficient sensitivity is a general issue for colorimetric paper-based analytical devices (PADs) for trace analyte detection, such as metal ions, in environmental water. This paper demonstrates the colorimetric detection of zinc ions (Zn²⁺) on a paper-based analytical device with an integrated analyte concentration system. Concentration of Zn²⁺ ions from an enlarged sample volume (1 mL) has been achieved with the aid of a colorimetric Zn²⁺ indicator (Zincon) electrostatically immobilized onto a filter paper substrate, in combination with highly water-absorbent materials. Analyte concentration as well as sample pretreatment, including pH adjustment and interferent masking, has been elaborated. The resulting device enables colorimetric quantification of Zn²⁺ in environmental water samples (tap water, river water) from a single sample application. The achieved detection limit of 0.53 μM is a significant improvement over that of a commercial colorimetric Zn²⁺ test paper (9.7 μM), demonstrating the efficiency of the developed analyte concentration system, which requires no additional equipment.
Analytical solutions in the two-cavity coupling problem
International Nuclear Information System (INIS)
Ayzatsky, N.I.
2000-01-01
Analytical solutions of the precise equations that describe the rf coupling of two cavities through a coaxial cylindrical hole are given for various limiting cases. For their derivation we have used a method of solution of an infinite set of linear algebraic equations, based on its transformation into dual integral equations.
Analytical solutions to orthotropic variable thickness disk problems
Directory of Open Access Journals (Sweden)
Ahmet N. ERASLAN
2016-02-01
An analytical model is developed to estimate the mechanical response of nonisothermal, orthotropic, variable thickness disks under a variety of boundary conditions. Combining the basic mechanical equations of the disk geometry with the equations of the orthotropic material, the elastic equation of the disk is obtained. This equation is transformed into a standard hypergeometric differential equation by means of a suitable transformation, and an analytical solution is then obtained in terms of hypergeometric functions. The boundary conditions used to complete the solutions simulate rotating annular disks with two free surfaces; stationary annular disks with a pressurized inner and a free outer surface; and disks with a free inner and a pressurized outer surface. The results of the solutions to each of these cases are presented in graphical form. It is observed that, for the three cases investigated, the elastic orthotropy parameter turns out to be an important parameter affecting the elastic behavior. Keywords: orthotropic disk, variable thickness, thermoelasticity, hypergeometric equation
On numerical-analytic techniques for boundary value problems
Czech Academy of Sciences Publication Activity Database
Rontó, András; Rontó, M.; Shchobak, N.
2012-01-01
Roč. 12, č. 3 (2012), s. 5-10 ISSN 1335-8243 Institutional support: RVO:67985840 Keywords : numerical-analytic method * periodic successive approximations * Lyapunov-Schmidt method Subject RIV: BA - General Mathematics http://www.degruyter.com/view/j/aeei.2012.12.issue-3/v10198-012-0035-1/v10198-012-0035-1.xml?format=INT
Nanomaterials in consumer products: a challenging analytical problem
Directory of Open Access Journals (Sweden)
Catia eContado
2015-08-01
Many products used in everyday life are made with the assistance of nanotechnologies. Cosmetics, pharmaceuticals, sunscreens and powdered foods are only a few examples of end products containing nano-sized particles (NPs), generally added to improve the product quality. To evaluate correctly the benefits versus the risks of engineered nanomaterials, and consequently to legislate in favor of consumer protection, it is necessary to know the hazards connected with the exposure levels. This information implies transversal studies and a number of different competences. From an analytical point of view, the identification, quantification and characterization of NPs in food matrices and in cosmetic or personal care products pose significant challenges, because NPs are usually present at low concentration levels and the matrices in which they are dispersed are complex and often incompatible with the analytical instruments that would be required for their detection and characterization. This paper focuses on some analytical techniques suitable for the detection, characterization and quantification of NPs in food and cosmetics products, and reports their recent application in characterizing specific metal and metal-oxide NPs in these two important industrial and market sectors. What clearly emerges from this research is the need for a characterization of the NPs that is as complete as possible, matching complementary information about different metrics, possibly achieved through validated procedures. More work should be done to produce standardized materials, to set up methodologies to determine number-based size distributions, and to obtain quantitative data about NPs in such complex matrices.
Analytical results of Tank 38H core samples -- Fall 1999
International Nuclear Information System (INIS)
Swingle, R.F.
2000-01-01
Two samples were pulled from Tank 38H in the fall of 1999: a variable depth sample (VDS) of the supernate was pulled in October, and a core sample from the salt layer was pulled in December. Analysis of the rinse from the outside of the core sample indicated no sign of volatile or semivolatile organics. Both the supernate and solids from the VDS and the dried core sample solids were analyzed for isotopes which could pose a criticality concern and for elements which could serve as neutron poisons, as well as other elements. Results of the elemental analyses of these samples show that significant quantities of such elements are present to mitigate the potential for nuclear criticality. However, it should be noted that the results given for the VDS solids elemental analyses may be higher than the actual concentrations in the solids, since the filter paper was dissolved along with the sample solids.
Visual Attention Modulates Insight versus Analytic Solving of Verbal Problems
Wegbreit, Ezra; Suzuki, Satoru; Grabowecky, Marcia; Kounios, John; Beeman, Mark
2012-01-01
Behavioral and neuroimaging findings indicate that distinct cognitive and neural processes underlie solving problems with sudden insight. Moreover, people with less focused attention sometimes perform better on tests of insight and creative problem solving. However, it remains unclear whether different states of attention, within individuals,…
Analytical results from Tank 38H criticality Sample HTF-093
International Nuclear Information System (INIS)
Wilmarth, W.R.
2000-01-01
Resumption of processing in the 242-16H Evaporator could cause salt dissolution in the Waste Concentration Receipt Tank (Tank 38H). Therefore, High Level Waste personnel sampled the tank at the salt surface. Results of elemental analysis of the dried sludge solids from this sample (HTF-093) show significant quantities of neutron poisons (i.e., sodium, iron, and manganese) present to mitigate the potential for nuclear criticality. Comparison of this sample with previous chemical and radiometric analyses of H-Area Evaporator samples shows high poison-to-actinide ratios.
PROBLEMS OF INFORMATION AND ANALYTICAL SUPPORT OF CONTEMPORARY STRATEGIC MANAGEMENT
Directory of Open Access Journals (Sweden)
M. A. Rodionov
2014-01-01
Problematic aspects of the information and analytical support of strategic decision making in modern management are considered. The role and place of such support in the process of elaborating and making management decisions in strategic planning are clarified. Existing approaches to estimating regularities in the course and outcomes of strategic processes are analyzed. Strategic forecasting issues are studied, as well as the decision maker's attitude to risk.
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS
Lou, X.; Waal, de B.F.M.; Milroy, L.G.; Dongen, van J.L.J.
2015-01-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression.
Analytical techniques for measurement of 99Tc in environmental samples
International Nuclear Information System (INIS)
Anon.
1979-01-01
Three new methods have been developed for measuring 99Tc in environmental samples. The most sensitive method is isotope dilution mass spectrometry, which allows measurement of about 1 × 10⁻¹² grams of 99Tc. Results of the analysis of five samples by this method compare very well with values obtained by a second, independent method, which involves counting of beta particles from 99Tc and internal conversion electrons from 97mTc. A third method involving electrothermal atomic absorption has also been developed. Although this method is not as sensitive as the first two techniques, the cost per analysis is expected to be considerably less for certain types of samples.
Reference Priors For Non-Normal Two-Sample Problems
Fernández, C.; Steel, M.F.J.
1997-01-01
The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly
Biased sampling, over-identified parameter problems and beyond
Qin, Jing
2017-01-01
This book is devoted to biased sampling problems (also called choice-based sampling in econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including medicine, epidemiology and public health, the social sciences and economics. The book addresses a range of important topics, including case-control studies, causal inference, missing data problems, meta-analysis, renewal processes and length-biased sampling problems, capture-recapture problems, case-cohort studies, exponential tilting genetic mixture models, etc. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science, among others.
Analytic Approach to Resolving Parking Problems in Downtown Zagreb
Directory of Open Access Journals (Sweden)
Adolf Malić
2005-01-01
Parking is one of the major problems in Zagreb, and in that respect Zagreb does not differ from other similar or bigger European cities. The problem the city is facing is presented in the paper. It is complex and can be solved gradually, using operative and planning measures, by applying assessments of the influential parameters on the basis of which the appropriate parking-garage spaces would be selected. Besides, all the knowledge learned from the experiences of similar European cities should be used in resolving the stationary traffic problem. The introduction of fast public urban transport would provide passengers with improved services (particularly in relation to travelling time), introducing a modern traffic system that would reduce the travelling time to below 30 minutes for the farthest relations. Further improvement in reducing parking problems in the downtown as well as the broader Zagreb area would not be possible without implementing this approach.
Analytical methods and problems for the diamides type of extractants
International Nuclear Information System (INIS)
Cuillerdier, C.; Nigond, L.; Musikas, C.; Vitart, H.; Hoel, P.
1989-01-01
Diamides of carboxylic acids, and especially malonamides, are able to extract alpha emitters (including trivalent ions such as Am and Cm) contained in the waste solutions of the nuclear industry. As they are completely incinerable and easy to purify, they could be an alternative to the CMPO-TBP mixture used in the TRUEX process. A large oxyalkyl radical enhances the distribution coefficients of americium in nitric acid sufficiently to permit the decontamination of waste solutions in a classical mixer-settler battery. Research is now being pursued with the aim of optimizing the formula of the extractant; the influence of the structure of the extractant on its basicity and its stability under radiolysis and hydrolysis is being investigated. Analytical methods (potentiometry and 13C NMR) have been developed for solvent titration, to evaluate the percentage of degradation and to identify some of the degradation products
Local extremal problems for bounded analytic functions without zeros
International Nuclear Information System (INIS)
Prokhorov, D V; Romanova, S V
2006-01-01
In the class B(t), t>0, of all functions f(z,t) = e^{-t} + c_1(t)z + c_2(t)z^2 + ... that are analytic in the unit disc U and such that 0 < |f(z,t)| < 1 in U. We suggest an algorithm for determining those t>0 for which the canonical functions provide the local maximum of Re c_n(t) in B(t). We describe the set of functionals L(f) = Σ_{k=0}^{n} λ_k c_k for which the canonical functions provide the maximum of Re L(f) in B(t) for small and large values of t. The proofs are based on optimization methods for solutions of control systems of differential equations
Local extremal problems for bounded analytic functions without zeros
Prokhorov, D. V.; Romanova, S. V.
2006-08-01
In the class B(t), t>0, of all functions f(z,t)=e^{-t}+c_1(t)z+c_2(t)z^2+\\dots that are analytic in the unit disc U and such that 0<|f(z,t)|<1 in U. We suggest an algorithm for determining those t>0 for which the canonical functions provide the local maximum of \\operatorname{Re}c_n(t) in B(t). We describe the set of functionals L(f)=\\sum_{k=0}^n\\lambda_kc_k for which the canonical functions provide the maximum of \\operatorname{Re}L(f) in B(t) for small and large values of t. The proofs are based on optimization methods for solutions of control systems of differential equations.
Analytical Solutions to Non-linear Mechanical Oscillation Problems
DEFF Research Database (Denmark)
Kaliji, H. D.; Ghadimi, M.; Barari, Amin
2011-01-01
In this paper, the Max-Min Method is utilized for solving the nonlinear oscillation problems. The proposed approach is applied to three systems with complex nonlinear terms in their motion equations. By means of this method, the dynamic behavior of oscillation systems can be easily approximated u...
A new analytical solution to the diffusion problem: Fourier series ...
African Journals Online (AJOL)
This paper reviews briefly the origin of Fourier Series Method. The paper then gives a vivid description of how the method can be applied to solve a diffusion problem, subject to some boundary conditions. The result obtained is quite appealing as it can be used to solve similar examples of diffusion equations. JONAMP Vol.
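The method the paper describes can be sketched for the standard case: the heat equation on [0, 1] with zero boundary values, solved by a Fourier sine series. The code below is a generic illustration (the diffusivity, initial condition, and grids are invented choices, not the paper's worked example):

```python
import numpy as np

ALPHA = 0.01  # diffusivity (illustrative value)

def fourier_solution(f, x, t, n_terms=50):
    """u_t = ALPHA*u_xx on [0,1], u(0,t)=u(1,t)=0, u(x,0)=f(x):
    u(x,t) = sum_n b_n sin(n*pi*x) exp(-ALPHA*(n*pi)^2*t)."""
    m = 2000
    xs = (np.arange(m) + 0.5) / m  # midpoint grid for the coefficient integrals
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        # Sine coefficient b_n = 2 * integral_0^1 f(s) sin(n*pi*s) ds.
        b_n = 2.0 * np.mean(f(xs) * np.sin(n * np.pi * xs))
        u += b_n * np.sin(n * np.pi * x) * np.exp(-ALPHA * (n * np.pi) ** 2 * t)
    return u

x = np.linspace(0.0, 1.0, 101)
u = fourier_solution(lambda s: np.sin(np.pi * s), x, t=5.0)
# For f(x) = sin(pi*x) the exact solution is exp(-ALPHA*pi^2*t)*sin(pi*x).
exact = np.exp(-ALPHA * np.pi**2 * 5.0) * np.sin(np.pi * x)
```

Because each mode decays like exp(-ALPHA*(n*pi)^2*t), the series converges rapidly for t > 0, which is why a modest number of terms suffices.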
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Nowadays, studies related to the distribution of metallic elements in biological samples are one of the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Post-Decontamination Vapor Sampling and Analytical Test Methods
2015-08-12
Chemical contaminants remaining on an item after it is decontaminated could pose an exposure hazard to unprotected personnel. These contaminants may include chemical warfare agents (CWAs) or their simulants, nontraditional agents (NTAs), and toxic industrial chemicals. The test methods cover a range of test articles from coupons, panels, and small fielded equipment items. Subject terms: vapor hazard; vapor sampling; chemical warfare
International Nuclear Information System (INIS)
Flament, T.; Goasmat, F.; Poilane, F.
2002-01-01
Managing the operation of large commercial spent nuclear fuel reprocessing plants, such as UP3 and UP2-800 in La Hague, France, requires an extensive analytical program and the shortest possible analysis response times. COGEMA, together with its engineering subsidiary SGN, decided to build high-performance laboratories to support operations in its plants. These laboratories feature automated equipment, safe environments for operators, and short response times, all in centralized installations. Implementation of a computerized analytical data management system and a fully automated pneumatic system for the transfer of radioactive samples was a key factor contributing to the successful operation of the laboratories and plants
Arsenolipids in marine samples – Status and analytical challenges
DEFF Research Database (Denmark)
Sele, Veronika; Amlund, Heidi; Sloth, Jens Jørgen
2014-01-01
Arsenic is a ubiquitous element that is present in the environment due to natural and anthropogenic processes. Marine samples are generally more concentrated in arsenic than terrestrial samples, with concentrations typically in the range of 1 to 100 mg kg-1. Arsenic has a complex chemistry and up...... to 100 naturally occurring arsenic species have so far been identified, both water-soluble and lipid-soluble compounds. Most research on arsenic and its chemical forms has so far focused on the water-soluble species, and a large set of data on occurrence and species exist. During the last decade...... of arsenolipids, where organic solvents are required for the separation of species, the ICP-MS needs to be modified by addition of oxygen and use of low solvent flow. A modified ICP-MS set-up for analysis of intact arsenolipids was first applied in 2005. Since then, around 40 intact arsenolipids have been...
Hanford analytical sample projections FY 1995--FY 2000. Revision 1
Energy Technology Data Exchange (ETDEWEB)
Simmons, F.M.
1994-12-02
Sample projections have been categorized into 7 major areas: Environmental Restoration, Tank Waste Remediation, Solid Waste, Liquid Effluents, Site Monitoring, Industrial Hygiene, and General Process Support Programs. The estimates run through Fiscal Year 2000 and are categorized by radiation level. The yearly sample projection for each program is categorized as follows: Category 1: non-radioactive; Category 2: <1 mR/hr {beta}/{gamma} and <10 nCi/g {alpha}; Category 3: 1 to <10 mR/hr {beta}/{gamma} and <10 nCi/g {alpha}; Category 4: <10 mR/hr {beta}/{gamma} and <200 nCi/g {alpha}; Category 5: 10 to <100 mR/hr {beta}/{gamma} and <200 nCi/g {alpha}; Category 6: >100 mR/hr {beta}/{gamma}; and Category 7: >200 nCi/g {alpha}.
Dissolution of nuclear fuel samples for analytical purposes. I
International Nuclear Information System (INIS)
Krtil, J.
1983-01-01
Main attention is devoted to procedures for dissolving fuels based on uranium metal and its alloys, uranium oxides and carbides, plutonium metal, plutonium dioxide, plutonium carbides, mixed PuC-UC carbides and mixed oxides (Pu,U)O2. Data from the literature and experience gained with the dissolution of nuclear fuel samples at the Central Control Laboratory of the Nuclear Research Institute at Rez are given. (B.S.)
Analytical Lie-algebraic solution of a 3D sound propagation problem in the ocean
Energy Technology Data Exchange (ETDEWEB)
Petrov, P.S., E-mail: petrov@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Prants, S.V., E-mail: prants@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Petrova, T.N., E-mail: petrova.tn@dvfu.ru [Far Eastern Federal University, 8 Sukhanova str., 690950, Vladivostok (Russian Federation)
2017-06-21
The problem of sound propagation in a shallow sea with variable bottom slope is considered. The sound pressure field produced by a time-harmonic point source in such inhomogeneous 3D waveguide is expressed in the form of a modal expansion. The expansion coefficients are computed using the adiabatic mode parabolic equation theory. The mode parabolic equations are solved explicitly, and the analytical expressions for the modal coefficients are obtained using a Lie-algebraic technique. - Highlights: • A group-theoretical approach is applied to a problem of sound propagation in a shallow sea with variable bottom slope. • An analytical solution of this problem is obtained in the form of modal expansion with analytical expressions of the coefficients. • Our result is the only analytical solution of the 3D sound propagation problem with no translational invariance. • This solution can be used for the validation of the numerical propagation models.
An analytic approach to resolving problems in medical ethics.
Candee, D; Puka, B
1984-06-01
Education in ethics among practising professionals should provide a systematic procedure for resolving moral problems. A method for such decision-making is outlined using the two classical orientations in moral philosophy, teleology and deontology. Teleological views such as utilitarianism resolve moral dilemmas by calculating the excess of good over harm expected to be produced by each feasible alternative for action. The deontological view focuses on rights, duties, and principles of justice. Both methods are used to resolve the 1971 Johns Hopkins case of a baby born with Down's syndrome and duodenal atresia.
Determination of palladium in biological samples applying nuclear analytical techniques
International Nuclear Information System (INIS)
Cavalcante, Cassio Q.; Sato, Ivone M.; Salvador, Vera L. R.; Saiki, Mitiko
2008-01-01
This study presents Pd determinations in bovine tissue samples containing palladium prepared in the laboratory, and CCQM-P63 automotive catalyst materials of the Proficiency Test, using instrumental thermal and epithermal neutron activation analysis and energy dispersive X-ray fluorescence techniques. Solvent extraction and solid phase extraction procedures were also applied to separate Pd from interfering elements before the irradiation in the nuclear reactor. The results obtained by different techniques were compared against each other to examine sensitivity, precision and accuracy. (author)
Non-linear analytic and coanalytic problems (Lp-theory, Clifford analysis, examples)
International Nuclear Information System (INIS)
Dubinskii, Yu A; Osipenko, A S
2000-01-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the 'orthogonal' sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented
Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
Analytical methodologies for aluminium speciation in environmental and biological samples--a review.
Bi, S P; Yang, X D; Zhang, F P; Wang, X L; Zou, G W
2001-08-01
It is recognized that aluminium (Al) is a potential environmental hazard. Acidic deposition has been linked to increased Al concentrations in natural waters. Elevated levels of Al might have serious consequences for biological communities. Of particular interest is the speciation of Al in aquatic environments, because Al toxicity depends on its forms and concentrations. In this paper, advances in analytical methodologies for Al speciation in environmental and biological samples during the past five years are reviewed. Concerns about the specific problems of Al speciation and highlights of some important methods are elucidated in sections devoted to hybrid techniques (HPLC or FPLC coupled with ET-AAS, ICP-AES, or ICP-MS), flow-injection analysis (FIA), nuclear magnetic resonance (27Al NMR), electrochemical analysis, and computer simulation. More than 130 references are cited.
Problems involved in sampling within and outside zones of emission
Energy Technology Data Exchange (ETDEWEB)
Oelschlaeger, W
1973-01-01
Problems involved in the sampling of plant materials both inside and outside emission zones are considered, especially in regard to trace element analysis. The basic problem revolves around obtaining as accurately as possible an average sample of actual composition. Elimination of error possibilities requires a knowledge of such factors as botanical composition, vegetation states, rains, mass losses in leaf and blossom parts, contamination through the soil, and gaseous or particulate emissions. Sampling and preparation of samples is also considered with respect to quantitative aspects of trace element analysis.
A direct sampling method to an inverse medium scattering problem
Ito, Kazufumi; Jin, Bangti; Zou, Jun
2012-01-01
In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when
International Nuclear Information System (INIS)
Lichiang; James, B.D.; Magee, R.J.
1991-01-01
This review covers electroanalytical methods reported in the literature for the determination of zinc, cadmium, selenium and arsenic in environmental and biological samples. A comprehensive survey of electroanalytical techniques used for the determination of four important elements, i.e. zinc, cadmium, selenium and arsenic, is reported herein with 322 references up to 1990. (Orig./A.B.)
Problem and Pathological Gambling in a Sample of Casino Patrons
Fong, Timothy W.; Campos, Michael D.; Brecht, Mary-Lynn; Davis, Alice; Marco, Adrienne; Pecanha, Viviane; Rosenthal, Richard J.
2010-01-01
Relatively few studies have examined gambling problems among individuals in a casino setting. The current study sought to examine the prevalence of gambling problems among a sample of casino patrons and examine alcohol and tobacco use, health status, and quality of life by gambling problem status. To these ends, 176 casino patrons were recruited by going to a Southern California casino and requesting that they complete an anonymous survey. Results indicated the following lifetime rates for at...
DEFF Research Database (Denmark)
Barari, Amin; Ganjavi, B.; Jeloudar, M. Ghanbari
2010-01-01
Purpose – In the last two decades with the rapid development of nonlinear science, there has appeared ever-increasing interest of scientists and engineers in the analytical techniques for nonlinear problems. This paper considers linear and nonlinear systems that are not only regarded as general...... and fluid mechanics. Design/methodology/approach – Two new but powerful analytical methods, namely, He's VIM and HPM, are introduced to solve some boundary value problems in structural engineering and fluid mechanics. Findings – Analytical solutions often fit under classical perturbation methods. However......, as with other analytical techniques, certain limitations restrict the wide application of perturbation methods, most important of which is the dependence of these methods on the existence of a small parameter in the equation. Disappointingly, the majority of nonlinear problems have no small parameter at all......
Analytical derivation: An epistemic game for solving mathematically based physics problems
Bajracharya, Rabindra R.; Thompson, John R.
2016-06-01
Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the analytical derivation game. This game involves deriving an equation through symbolic manipulations and routine mathematical operations, usually without any physical interpretation of the processes. This game often creates cognitive obstacles in students, preventing them from using alternative resources or better approaches during problem solving. We conducted hour-long, semi-structured, individual interviews with fourteen introductory physics students. Students were asked to solve four "pseudophysics" problems containing algebraic and graphical representations. The problems required the application of the fundamental theorem of calculus (FTC), which is one of the most frequently used mathematical concepts in physics problem solving. We show that the analytical derivation game is necessary, but not sufficient, to solve mathematically based physics problems, specifically those involving graphical representations.
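The fundamental theorem of calculus that those tasks rest on, ∫_a^b F'(x) dx = F(b) − F(a), is easy to verify numerically; a minimal sketch (the function and interval are arbitrary illustrative choices):

```python
import math

def definite_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# FTC: integrating F'(x) = cos(x) over [0, 1] recovers F(1) - F(0) = sin(1).
lhs = definite_integral(math.cos, 0.0, 1.0)
rhs = math.sin(1.0) - math.sin(0.0)
print(abs(lhs - rhs))  # agreement to numerical precision
```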
MoonDB — A Data System for Analytical Data of Lunar Samples
Lehnert, K.; Ji, P.; Cai, M.; Evans, C.; Zeigler, R.
2018-04-01
MoonDB is a data system that makes analytical data from the Apollo lunar sample collection and lunar meteorites accessible by synthesizing published and unpublished datasets in a relational database with an online search interface.
International Nuclear Information System (INIS)
Hobbs, D.T.
1996-01-01
Recent processing of dilute solutions through the 2H-Evaporator system caused dissolution of salt in Tank 38H, the concentrate receipt tank. This report documents analytical results for samples taken from this evaporator system
Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka
2015-05-01
Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been
Energy Technology Data Exchange (ETDEWEB)
West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.L.; Scarborough, S.S. [Oak Ridge National Lab., TN (United States); Bottrell, D.W. [USDOE, Washington, DC (United States)
1996-10-01
This study was undertaken to examine the hypothesis that prevalent and priority purgeable VOCs in properly preserved water samples are stable for at least 28 days. (VOCs are considered stable if concentrations do not change by more than 10%.) Surface water was spiked with 44 purgeable VOCs. Results showed that the measurement of 35 out of 44 purgeable VOCs in properly preserved water samples (4 °C, 250 mg NaHSO{sub 4}, no headspace in 40 mL VOC vials with 0.010-in. Teflon-lined silicone septum caps) will not be affected by sample storage for 28 days. Larger changes (>10%) and low practical reporting times were observed for a few analytes, e.g. acrolein, CS{sub 2}, vinyl acetate, etc.; these also involve other analytical problems. Advantages of a 28-day (compared to 14-day) holding time are pointed out.
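The 10% stability criterion used in that study is straightforward to apply; in the sketch below the analyte names are taken from the abstract, but the day-0 and day-28 concentrations are invented for illustration:

```python
def stability_flags(day0, day28, threshold=0.10):
    """Return {analyte: True} when the day-28 concentration is within
    `threshold` (relative change) of the day-0 value, i.e. stable."""
    return {a: abs(day28[a] - c0) / c0 <= threshold for a, c0 in day0.items()}

day0 = {"benzene": 20.0, "acrolein": 20.0, "vinyl acetate": 20.0}   # ug/L, invented
day28 = {"benzene": 19.1, "acrolein": 12.5, "vinyl acetate": 16.0}  # ug/L, invented
flags = stability_flags(day0, day28)
print(flags)  # benzene passes (4.5% change); the other two fail the 10% criterion
```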
Determination of 237Np in environmental and nuclear samples: A review of the analytical method
International Nuclear Information System (INIS)
Thakur, P.; Mulholland, G.P.
2012-01-01
A number of analytical methods have been developed and used for the determination of neptunium in environmental and nuclear fuel samples using alpha spectrometry, ICP-MS, and other analytical techniques. This review summarizes and discusses the development of radiochemical procedures for the separation of neptunium (Np) since the beginning of the nuclear industry, followed by a more detailed discussion of recent trends in the separation of neptunium. This article also highlights the progress in analytical methods and issues associated with the determination of neptunium in environmental samples. - Highlights: ► Determination of Np in environmental and nuclear samples is reviewed. ► Various analytical methods used for the determination of Np are listed. ► Progress and issues associated with the determination of Np are discussed.
Van Eeckhaut, Ann; Mangelings, Debby
2015-09-10
Peptide-based biopharmaceuticals represent one of the fastest growing classes of new drug molecules. New reaction types included in the synthesis strategies to reduce the rapid metabolism of peptides, along with the availability of new formulation and delivery technologies, have resulted in increased marketing of peptide drug products. In this regard, the development of analytical methods for the quantification of peptides in pharmaceutical and biological samples is of utmost importance. From the sample preparation step to their analysis by means of chromatographic or electrophoretic methods, many difficulties must be tackled. Recent developments in analytical techniques place more and more emphasis on the use of green analytical techniques. This review will discuss the progress in, and challenges observed during, green analytical method development for the quantification of peptides in pharmaceutical and biological samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Xiao-Ying Qin
2014-01-01
Full Text Available An Adomian decomposition method (ADM) is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
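The Adomian decomposition machinery itself is compact. As a hedged illustration of the general method (applied to a simple nonlinear ODE, not to the two-phase Stefan model, and in Python rather than the MATLAB mentioned above), the series terms can be generated symbolically:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_series(N, u0, n_terms=5):
    """Adomian series for u' = -N(u), u(0) = u0, with Adomian polynomials
    A_n = (1/n!) * d^n/dlam^n N(sum_k lam^k u_k) evaluated at lam = 0."""
    u = [sp.sympify(u0)]
    for n in range(n_terms - 1):
        phi = sum(lam**k * u_k for k, u_k in enumerate(u))
        A_n = sp.diff(N(phi), lam, n).subs(lam, 0) / sp.factorial(n)
        u.append(sp.integrate(-A_n, (t, 0, t)))  # u_{n+1} from the integral form
    return sp.expand(sum(u))

# u' = -u^2, u(0) = 1 has exact solution 1/(1+t); ADM reproduces its
# Taylor partial sums: 1 - t + t^2 - t^3 + t^4.
approx = adomian_series(lambda v: v**2, 1, n_terms=5)
print(approx)
```

Each term is obtained from a routine integral of the previous Adomian polynomial, which is why the abstract can claim an easy implementation and fast convergence.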
International Nuclear Information System (INIS)
Antler, Margaret; Ying Hai; Burns, David H.; Salin, Eric D.
2003-01-01
A sample diagnosis procedure that uses both non-analyte and analyte signals to estimate matrix effects in inductively coupled plasma-mass spectrometry is presented. Non-analyte signals are those of background species in the plasma (e.g. N+, ArO+), and changes in these signals can indicate changes in plasma conditions. Matrix effects of Al, Ba, Cs, K and Na on 19 non-analyte signals and 15 element signals were monitored. Multiple linear regression was used to build the prediction models, using a genetic algorithm for objective feature selection. Analyte elemental signals and non-analyte signals were compared for diagnosing matrix effects, and both were found to be suitable for estimating matrix effects. Individual analyte matrix effect estimation was compared with the overall matrix effect prediction, and models used to diagnose overall matrix effects were more accurate than individual analyte models. In previous work [Spectrochim. Acta Part B 57 (2002) 277], we tested models for analytical decision making. The current models were tested in the same way, and were able to successfully diagnose matrix effects with at least an 80% success rate
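The regression step of such a diagnosis model can be sketched with ordinary least squares. Everything below (signal matrix, coefficients, noise level) is synthetic, and the genetic-algorithm feature selection used in the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Rows = plasma conditions/samples; columns = non-analyte background
# signals (stand-ins for species such as N+ or ArO+), mean-centered.
X = rng.normal(size=(40, 5))
true_w = np.array([0.8, -0.5, 0.0, 0.3, 0.0])   # synthetic sensitivities
y = X @ true_w + 0.01 * rng.normal(size=40)     # "matrix effect" response

# Multiple linear regression with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(coef[:5])  # recovers true_w to within the noise
```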
Mujiasih; Waluya, S. B.; Kartono; Mariani
2018-03-01
Skills in working on the geometry problems great needs of the competence of Geometric Reasoning. As a teacher candidate, State Islamic University (UIN) students need to have the competence of this Geometric Reasoning. When the geometric reasoning in solving of geometry problems has grown well, it is expected the students are able to write their ideas to be communicative for the reader. The ability of a student's mathematical communication is supposed to be used as a marker of the growth of their Geometric Reasoning. Thus, the search for the growth of geometric reasoning in solving of analytic geometry problems will be characterized by the growth of mathematical communication abilities whose work is complete, correct and sequential, especially in writing. Preceded with qualitative research, this article was the result of a study that explores the problem: Was the search for the growth of geometric reasoning in solving analytic geometry problems could be characterized by the growth of mathematical communication abilities? The main activities in this research were done through a series of activities: (1) Lecturer trains the students to work on analytic geometry problems that were not routine and algorithmic process but many problems that the process requires high reasoning and divergent/open ended. (2) Students were asked to do the problems independently, in detail, complete, order, and correct. (3) Student answers were then corrected each its stage. (4) Then taken 6 students as the subject of this research. (5) Research subjects were interviewed and researchers conducted triangulation. The results of this research, (1) Mathematics Education student of UIN Semarang, had adequate the mathematical communication ability, (2) the ability of this mathematical communication, could be a marker of the geometric reasoning in solving of problems, and (3) the geometric reasoning of UIN students had grown in a category that tends to be good.
Big Data Analytics as Input for Problem Definition and Idea Generation in Technological Design
Escandón-Quintanilla , Ma-Lorena; Gardoni , Mickaël; Cohendet , Patrick
2016-01-01
Part 10: Big Data Analytics and Business Intelligence; International audience; Big data analytics enables organizations to process massive amounts of data in shorter amounts of time and with more understanding than ever before. Many uses have been found for these tools and techniques, especially in decision making. However, few applications have been found in the first stages of innovation, namely problem definition and idea generation. This paper discusses how big data an...
Simulating quantum correlations as a distributed sampling problem
International Nuclear Information System (INIS)
Degorre, Julien; Laplante, Sophie; Roland, Jeremie
2005-01-01
It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator value measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states
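The building block the authors start from, obtaining samples of a biased distribution from shared uniform random variables, corresponds in its simplest classical form to inverse-transform sampling. The sketch below illustrates that generic idea only, not the authors' specific protocol for simulating quantum correlations:

```python
import numpy as np

def sample_from_shared_uniform(u, probs):
    """Map a uniform [0,1) variate u to an outcome index drawn with the
    given probabilities. Two parties holding the same shared u obtain
    the same outcome with no communication."""
    cdf = np.cumsum(probs)
    return int(np.searchsorted(cdf, u, side='right'))

rng = np.random.default_rng(1)
probs = [0.5, 0.3, 0.2]
draws = [sample_from_shared_uniform(rng.random(), probs) for _ in range(100_000)]
freqs = np.bincount(draws, minlength=3) / len(draws)
print(freqs)  # close to [0.5, 0.3, 0.2]
```

The distributed flavor comes from the shared variate u: any party with access to the same randomness reproduces the same sample, which is what makes such constructions useful for simulating correlations.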
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-12-15
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.
The limited relevance of analytical ethics to the problems of bioethics.
Holmes, R L
1990-04-01
Philosophical ethics comprises metaethics, normative ethics and applied ethics. These have characteristically received analytic treatment by twentieth-century Anglo-American philosophy. But there has been disagreement over their interrelationship to one another and the relationship of analytical ethics to substantive morality--the making of moral judgments. I contend that the expertise philosophers have in either theoretical or applied ethics does not equip them to make sounder moral judgments on the problems of bioethics than nonphilosophers. One cannot "apply" theories like Kantianism or consequentialism to get solutions to practical moral problems unless one knows which theory is correct, and that is a metaethical question over which there is no consensus. On the other hand, to presume to be able to reach solutions through neutral analysis of problems is unavoidably to beg controversial theoretical issues in the process. Thus, while analytical ethics can play an important clarificatory role in bioethics, it can neither provide, nor substitute for, moral wisdom.
Wuethrich, Alain; Haddad, Paul R; Quirino, Joselito P
2015-04-21
A sample preparation device for the simultaneous enrichment and separation of cationic and anionic analytes was designed and implemented in an eight-channel configuration. The device is based on the use of an electric field to transfer the analytes from a large volume of sample into small volumes of electrolyte that was suspended into two glass micropipettes using a conductive hydrogel. This simple, economical, fast, and green (no organic solvent required) sample preparation scheme was evaluated using cationic and anionic herbicides as test analytes in water. The analytical figures of merit and ecological aspects were evaluated against the state-of-the-art sample preparation, solid-phase extraction. A drastic reduction in both sample preparation time (94% faster) and resources (99% less consumables used) was observed. Finally, the technique in combination with high-performance liquid chromatography and capillary electrophoresis was applied to analysis of quaternary ammonium and phenoxypropionic acid herbicides in fortified river water as well as drinking water (at levels relevant to Australian guidelines). The presented sustainable sample preparation approach could easily be applied to other charged analytes or adopted by other laboratories.
Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria
2012-10-10
Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can in fact be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often extremely different, and it is impossible to compare them and reliably identify the paint binder. This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database also presents data from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in
Problem and pathological gambling in a sample of casino patrons.
Fong, Timothy W; Campos, Michael D; Brecht, Mary-Lynn; Davis, Alice; Marco, Adrienne; Pecanha, Viviane; Rosenthal, Richard J
2011-03-01
Relatively few studies have examined gambling problems among individuals in a casino setting. The current study sought to examine the prevalence of gambling problems among a sample of casino patrons and to examine alcohol and tobacco use, health status, and quality of life by gambling problem status. To these ends, 176 casino patrons were recruited by going to a Southern California casino and requesting that they complete an anonymous survey. Results indicated the following lifetime rates for at-risk, problem, and pathological gambling: 29.2, 10.7, and 29.8%. Differences were found with regard to gambling behavior, and results indicated higher rates of smoking among individuals with gambling problems, but not higher rates of alcohol use. Self-rated quality of life was lower among pathological gamblers relative to non-problem gamblers, but did not differ from at-risk or problem gamblers. Although subject to some limitations, our data support the notion of a higher frequency of gambling problems among casino patrons and may suggest the need for increased interventions for gambling problems on-site at casinos.
40 CFR 90.421 - Dilute gaseous exhaust sampling and analytical system description.
2010-07-01
... gas mixture temperature, measured at a point immediately ahead of the critical flow venturi, must be... analytical system description. (a) General. The exhaust gas sampling system described in this section is... requirements are as follows: (1) This sampling system requires the use of a Positive Displacement Pump—Constant...
Analytical Methodology for the Determination of Radium Isotopes in Environmental Samples
International Nuclear Information System (INIS)
2010-01-01
Reliable, comparable and 'fit for purpose' results are an essential requirement for any decision based on analytical measurements. For the analyst, the availability of tested and validated analytical procedures is an extremely important tool for production of such analytical measurements. For maximum utility, such procedures should be comprehensive, clearly formulated, and readily available to both the analyst and the customer for reference. Since 2004, the environment programme of the IAEA has included activities aimed at the development of a set of procedures for the determination of radionuclides in terrestrial environmental samples. Measurements of radium isotopes are important for radiological and environmental protection, geochemical and geochronological investigations, hydrology, etc. The suite of isotopes creates and stimulates continuing interest in the development of new methods for determination of radium in various media. In this publication, the four most routinely used analytical methods for radium determination in biological and environmental samples, i.e. alpha spectrometry, gamma spectrometry, liquid scintillation spectrometry and mass spectrometry, are reviewed
On the Use of Importance Sampling in Particle Transport Problems
International Nuclear Information System (INIS)
Eriksson, B.
1965-06-01
The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is taken into consideration. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations are given, which have been used with great success in practice.
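The transformations Eriksson derives are specific to the transport equation; purely as a generic, hypothetical illustration of the importance-sampling idea itself — sampling from a modified distribution and reweighting by the density ratio so that the important region is visited more often — one can estimate the small tail probability P(X > 4) for X ~ Exp(1), whose exact value is e⁻⁴ ≈ 0.0183 (rates and sample sizes below are illustrative choices, not from the report):

```python
import math
import random

random.seed(1)
N = 200_000
TARGET_RATE = 1.0     # X ~ Exp(1); we want P(X > 4) = e^{-4}
PROPOSAL_RATE = 0.25  # heavier-tailed proposal visits the tail far more often

# Naive Monte Carlo: only ~1.8% of draws land in the region of interest.
naive = sum(random.expovariate(TARGET_RATE) > 4 for _ in range(N)) / N

# Importance sampling: draw from the proposal, reweight by the density ratio.
est = 0.0
for _ in range(N):
    x = random.expovariate(PROPOSAL_RATE)
    if x > 4:
        weight = (TARGET_RATE * math.exp(-TARGET_RATE * x)) / (
            PROPOSAL_RATE * math.exp(-PROPOSAL_RATE * x))
        est += weight
est /= N

print(naive, est, math.exp(-4))  # both estimates close to e^{-4}
```

The reweighted estimator is unbiased for the same quantity but concentrates samples where they matter, which is exactly the motivation for importance sampling in deep-penetration transport calculations.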
Multi-frequency direct sampling method in inverse scattering problem
Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang
2017-10-01
We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
On the Use of Importance Sampling in Particle Transport Problems
Energy Technology Data Exchange (ETDEWEB)
Eriksson, B
1965-06-15
The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is taken into consideration. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations are given, which have been used with great success in practice.
Means of introducing an analyte into liquid sampling atmospheric pressure glow discharge
Marcus, R. Kenneth; Quarles, Jr., Charles Derrick; Russo, Richard E.; Koppenaal, David W.; Barinaga, Charles J.; Carado, Anthony J.
2017-01-03
A liquid sampling, atmospheric pressure, glow discharge (LS-APGD) device as well as systems that incorporate the device and methods for using the device and systems are described. The LS-APGD includes a hollow capillary for delivering an electrolyte solution to a glow discharge space. The device also includes a counter electrode in the form of a second hollow capillary that can deliver the analyte into the glow discharge space. A voltage across the electrolyte solution and the counter electrode creates the microplasma within the glow discharge space that interacts with the analyte to move it to a higher energy state (vaporization, excitation, and/or ionization of the analyte).
Directory of Open Access Journals (Sweden)
Gabriele Anton
Full Text Available Advances in the "omics" field bring about the need for a high number of good quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most of the laboratory errors occur in the pre-analytical phase. Therefore evidence-based standard operating procedures for the pre-analytical phase as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation.
Solving probabilistic inverse problems rapidly with prior samples
Käufl, Paul; Valentine, Andrew P.; de Wit, Ralph W.; Trampert, Jeannot
2016-01-01
Owing to the increasing availability of computational resources, in recent years the probabilistic solution of non-linear, geophysical inverse problems by means of sampling methods has become increasingly feasible. Nevertheless, we still face situations in which a Monte Carlo approach is not
Identification of clinical biomarkers for pre-analytical quality control of blood samples.
Kang, Hyun Ju; Jeon, Soon Young; Park, Jae-Sun; Yun, Ji Young; Kil, Han Na; Hong, Won Kyung; Lee, Mee-Hee; Kim, Jun-Woo; Jeon, Jae-Pil; Han, Bok Ghee
2013-04-01
Pre-analytical conditions are key factors in maintaining the high quality of biospecimens. They are necessary for accurate reproducibility of experiments in the field of biomarker discovery as well as achieving optimal specificity of laboratory tests for clinical diagnosis. In research at the National Biobank of Korea, we evaluated the impact of pre-analytical conditions on the stability of biobanked blood samples by measuring biochemical analytes commonly used in clinical laboratory tests. We measured 10 routine laboratory analytes in serum and plasma samples from healthy donors (n = 50) with a chemistry autoanalyzer (Hitachi 7600-110). The analyte measurements were made at different time courses based on delay of blood fractionation, freezing delay of fractionated serum and plasma samples, and at different cycles (0, 1, 3, 6, 9) of freeze-thawing. Statistically significant changes from the reference sample mean were determined using the repeated-measures ANOVA and the significant change limit (SCL). The serum levels of GGT and LDH were changed significantly depending on both the time interval between blood collection and fractionation and the time interval between fractionation and freezing of serum and plasma samples. The glucose level was most sensitive only to the elapsed time between blood collection and centrifugation for blood fractionation. Based on these findings, a simple formula (glucose decrease by 1.387 mg/dL per hour) was derived to estimate the length of time delay after blood collection. In addition, AST, BUN, GGT, and LDH showed sensitive responses to repeated freeze-thaw cycles of serum and plasma samples. These results suggest that GGT and LDH measurements can be used as quality control markers for certain pre-analytical conditions (eg, delayed processing or repeated freeze-thawing) of blood samples which are either directly used in the laboratory tests or stored for future research in the biobank.
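Taking the reported rate at face value (glucose decreases by 1.387 mg/dL per hour of processing delay), the delay estimate implied by the abstract's "simple formula" can be illustrated as follows; the function name and the example concentrations are hypothetical:

```python
GLUCOSE_DECAY_PER_HOUR = 1.387  # mg/dL per hour, as reported in the abstract

def estimated_delay_hours(baseline_mg_dl, measured_mg_dl):
    """Estimate hours elapsed between blood collection and centrifugation
    from the observed glucose drop (linear model from the study)."""
    drop = baseline_mg_dl - measured_mg_dl
    return drop / GLUCOSE_DECAY_PER_HOUR

# e.g. a sample drawn at 90.0 mg/dL that now measures 85.8 mg/dL
print(round(estimated_delay_hours(90.0, 85.8), 1))  # ≈ 3.0 hours
```

A drop of 4.2 mg/dL divided by the 1.387 mg/dL per hour rate gives roughly a three-hour processing delay, which is how such a marker could flag samples with poor pre-analytical handling.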
Electromagnetic wave theory for boundary-value problems an advanced course on analytical methods
Eom, Hyo J
2004-01-01
Electromagnetic wave theory is based on Maxwell's equations, and electromagnetic boundary-value problems must be solved to understand electromagnetic scattering, propagation, and radiation. Electromagnetic theory finds practical applications in wireless telecommunications and microwave engineering. This book is written as a text for a two-semester graduate course on electromagnetic wave theory. As such, Electromagnetic Wave Theory for Boundary-Value Problems is intended to help students enhance analytic skills by solving pertinent boundary-value problems. In particular, the techniques of Fourier transform, mode matching, and residue calculus are utilized to solve some canonical scattering and radiation problems.
International Nuclear Information System (INIS)
Peters, T.; Fink, S.
2011-01-01
As part of the implementation process for the Next Generation Cesium Extraction Solvent (NGCS), SRNL and F/H Lab performed a series of analytical cross-checks to ensure that the components in the NGCS solvent system do not constitute an undue analytical challenge. For measurement of entrained Isopar® L in aqueous solutions, both labs performed similarly, with results more reliable at higher concentrations (near 50 mg/L). Low bias occurred in both labs, as seen previously in comparable blind studies for the baseline solvent system. SRNL recommends consideration of the use of Teflon™ caps on all sample containers used for this purpose. For pH measurements, the labs showed reasonable agreement but considerable positive bias for dilute boric acid solutions. SRNL recommends consideration of using an alternate analytical method for qualification of boric acid concentrations.
Hyperbolic systems with analytic coefficients well-posedness of the Cauchy problem
Nishitani, Tatsuo
2014-01-01
This monograph focuses on the well-posedness of the Cauchy problem for linear hyperbolic systems with matrix coefficients. Mainly two questions are discussed: (A) Under which conditions on lower order terms is the Cauchy problem well posed? (B) When is the Cauchy problem well posed for any lower order term? For first order two by two systems with two independent variables with real analytic coefficients, we present complete answers for both (A) and (B). For first order systems with real analytic coefficients we prove general necessary conditions for question (B) in terms of minors of the principal symbols. With regard to sufficient conditions for (B), we introduce hyperbolic systems with nondegenerate characteristics, which contains strictly hyperbolic systems, and prove that the Cauchy problem for hyperbolic systems with nondegenerate characteristics is well posed for any lower order term. We also prove that any hyperbolic system which is close to a hyperbolic system with a nondegenerate characteristic of mu...
Wetterneck, Chad T.; Hart, John M.
2012-01-01
Problems with intimacy and interpersonal issues are exhibited across most psychiatric disorders. However, most of the targets in Cognitive Behavioral Therapy are primarily intrapersonal in nature, with few directly involved in interpersonal functioning and effective intimacy. Functional Analytic Psychotherapy (FAP) provides a behavioral basis for…
Analytical Solution of Nonlinear Problems in Classical Dynamics by Means of Lagrange-Ham
DEFF Research Database (Denmark)
Kimiaeifar, Amin; Mahdavi, S. H; Rabbani, A.
2011-01-01
In this work, a powerful analytical method, called the Homotopy Analysis Method (HAM), is coupled with the Lagrange method to obtain exact solutions for nonlinear problems in classical dynamics. The governing equations are obtained by using the Lagrange method, and then the nonlinear governing...
Comment on 'analytic solution of the relativistic Coulomb problem for a spinless Salpeter equation'
International Nuclear Information System (INIS)
Lucha, W.; Schoeberl, F.F.
1994-01-01
We demonstrate that the analytic solution for the set of energy eigenvalues of the semi-relativistic Coulomb problem reported by B. and L. Durand is in clear conflict with an upper bound on the ground-state energy level derived by some straightforward variational procedure. (authors)
An analytical study of the Q(s, S) policy applied to the joint replenishment problem
DEFF Research Database (Denmark)
Nielsen, Christina; Larsen, Christian
2005-01-01
be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method for computing the optimal Q(s,S) policy, under the assumption that demands follow a Poisson Process...
An analytical study of the Q(s,S) policy applied on the joint replenishment problem
DEFF Research Database (Denmark)
Nielsen, Christina; Larsen, Christian
2002-01-01
be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method to compute the optimal Q(s,S) policy, under the assumption that demands follow a Poisson process...
An analytic solution of the static problem of inclined risers conveying fluid
Alfosail, Feras; Nayfeh, Ali H.; Younis, Mohammad I.
2016-01-01
We use the method of matched asymptotic expansion to develop an analytic solution to the static problem of clamped–clamped inclined risers conveying fluid. The inclined riser is modeled as an Euler–Bernoulli beam taking into account its self
OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES
Directory of Open Access Journals (Sweden)
Maja Vrkljan
2004-12-01
Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. Analytical FAAS method of determining cobalt, chromium, copper, nickel, lead and zinc content in gabbro sample and geochemical standard AGV-1 has been applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as Na2CO3 fusion technique. The results obtained by different methods have been compared and dissolution in the mixture of HNO3 + HF has been recommended as optimal.
Adaptive sampling method in deep-penetration particle transport problem
International Nuclear Information System (INIS)
Wang Ruihong; Ji Zhicheng; Pei Lucheng
2012-01-01
The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived for a better solution with the achieved information. The main advantage of the adaptive scheme is to choose the most suitable sampling number from the emission-point station so as to obtain the minimum value of the total cost in the process of the random walk. Further, the related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to ensure that the sampling number of the emission particles is proportional to the importance function. The numerical results show that the adaptive scheme with the emission point as a station can, to some degree, overcome the difficulty of underestimating the result, and the adaptive importance sampling method also gives satisfactory results. (authors)
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
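The paper's algorithm recursively bisects the input space guided by distances between probability distributions; the following is a much simpler, hypothetical one-dimensional sketch of the same goal — equal-width strata with proportional allocation — and not the authors' procedure:

```python
import random

def stratified_sample(data, n, n_strata=10):
    """Draw roughly n points whose empirical distribution tracks that of
    `data`, by sampling proportionally from equal-width strata (1-D sketch)."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_strata or 1.0  # guard against all-equal data
    strata = [[] for _ in range(n_strata)]
    for x in data:
        i = min(int((x - lo) / width), n_strata - 1)
        strata[i].append(x)
    sample = []
    for bucket in strata:
        k = round(n * len(bucket) / len(data))  # proportional allocation
        sample.extend(random.sample(bucket, min(k, len(bucket))))
    return sample

random.seed(0)
data = [random.gauss(0, 1) for _ in range(10_000)]
s = stratified_sample(data, 500)
# stratum proportions in the sample mirror those in the data, so summary
# statistics of the subsample stay close to those of the full set
print(len(s), sum(s) / len(s), sum(data) / len(data))
```

Because each stratum contributes in proportion to its mass, the subsample's distribution stays close to the original, which is the property the paper formalizes via explicit error bounds on distances between probabilities.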
Direct sampling methods for inverse elastic scattering problems
Ji, Xia; Liu, Xiaodong; Xi, Yingxia
2018-03-01
We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using the different components of the far field patterns. Only inner products are involved in the computation, thus the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For the sampling points outside the scatterers, we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain the data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstructions. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.
An analytical approach for a nodal scheme of two-dimensional neutron transport problems
International Nuclear Information System (INIS)
Barichello, L.B.; Cabrera, L.C.; Prolo Filho, J.F.
2011-01-01
Research highlights: Nodal equations for a two-dimensional neutron transport problem; analytical discrete ordinates method; numerical results compared with the literature. - Abstract: In this work, a solution for a two-dimensional neutron transport problem, in Cartesian geometry, is proposed on the basis of nodal schemes. In this context, one-dimensional equations are generated by an integration process of the multidimensional problem. Here, the integration is performed for the whole domain such that no iterative procedure between nodes is needed. The ADO method is used to develop an analytical discrete ordinates solution for the one-dimensional integrated equations, such that final solutions are analytical in terms of the spatial variables. The ADO approach, along with a level-symmetric quadrature scheme, leads to a significant order reduction of the associated eigenvalue problems. Relations between the averaged fluxes and the unknown fluxes at the boundary are introduced as the auxiliary equations usually needed in nodal schemes. Numerical results are presented and compared with test problems.
Directory of Open Access Journals (Sweden)
Zoraida Sosa-Ferrera
2013-01-01
Full Text Available Endocrine-disruptor compounds (EDCs) can mimic natural hormones and produce adverse effects on endocrine functions by interacting with estrogen receptors. EDCs include both natural and synthetic chemicals, such as hormones, personal care products, surfactants, and flame retardants, among others. EDCs are characterised by their ubiquitous presence at trace-level concentrations and by their wide diversity. Since the discovery of the adverse effects of these pollutants on wildlife and human health, analytical methods have been developed for their qualitative and quantitative determination. In particular, mass-based analytical methods show excellent sensitivity and precision for their quantification. This paper reviews recently published analytical methodologies for the sample preparation and determination of these compounds in different environmental and biological matrices by liquid chromatography coupled with mass spectrometry. The various sample preparation techniques are compared and discussed. In addition, recent developments and advances in this field are presented.
Directory of Open Access Journals (Sweden)
Eloy Yordad Companioni Damas
2009-01-01
Full Text Available This work describes the validation of an analytical procedure for the analysis of petroleum hydrocarbons in marine sediment samples. The proposed protocol is able to measure n-alkanes and polycyclic aromatic hydrocarbons (PAH) in samples at concentrations as low as 30 ng/g, with a precision better than 15% for most analytes. The extraction efficiency of fortified sediments varied from 65.1 to 105.6% and from 59.7 to 97.8% for n-alkanes and PAH in the ranges C16-C32 and fluoranthene-benzo(a)pyrene, respectively. The analytical protocol was applied to determine petroleum hydrocarbons in sediments collected from a marine coastal zone.
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Full Text Available Subject of Research. We present research results for the signal uncertainty problem that naturally arises for the developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes for servomechanisms. Method. The problem was solved with the use of the Besekerskiy engineering approach, formulated in 1958. This gave the possibility to reduce the requirements for the input signal composition of servomechanisms by using only two of its quantitative characteristics, namely maximum speed and maximum acceleration. Information about the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. In combination with the requirements for maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the error amplitude characteristic of the system and then to convert it into the amplitude characteristic of the open-loop system transfer function. While previously the Besekerskiy approach was mainly used in relation to the apparatus of logarithmic characteristics, we use this approach for the analytical synthesis of consecutive compensators. Main Results. The proposed technique is used to create analytical representations of the "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form of the analytical representation of transfer functions is the basis for the design of a consecutive compensator that delivers the desired placement of the state matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The given procedure of consecutive compensator analytical design on the basis of the Besekerskiy engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are...
Energy Technology Data Exchange (ETDEWEB)
Zou, Li [Dalian Univ. of Technology, Dalian City (China). State Key Lab. of Structural Analysis for Industrial Equipment; Liang, Songxin; Li, Yawei [Dalian Univ. of Technology, Dalian City (China). School of Mathematical Sciences; Jeffrey, David J. [Univ. of Western Ontario, London (Canada). Dept. of Applied Mathematics
2017-06-01
Nonlinear boundary value problems arise frequently in physical and mechanical sciences. An effective analytic approach with two parameters is first proposed for solving nonlinear boundary value problems. It is demonstrated that solutions given by the two-parameter method are more accurate than solutions given by the Adomian decomposition method (ADM). It is further demonstrated that solutions given by the ADM can also be recovered from the solutions given by the two-parameter method. The effectiveness of this method is demonstrated by solving some nonlinear boundary value problems modeling beam-type nano-electromechanical systems.
International Nuclear Information System (INIS)
Fossum, Kristian; Mannseth, Trond
2014-01-01
We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and to have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics vary from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison: whether data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions under which similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)
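For intuition about the sequential-versus-simultaneous distinction, the simplest linear-Gaussian case, where the two are provably identical, can be sketched as follows (an assumed toy model, not the paper's JS methods):

```python
import numpy as np

# Toy linear-Gaussian sketch: for a linear forward model with Gaussian prior
# and noise, assimilating observations one at a time yields exactly the same
# posterior as assimilating them all at once. Model and numbers are assumed.

def update(mean, var, h, d, r):
    """One scalar Kalman/Bayes update for an observation d = h*m + noise of variance r."""
    gain = var * h / (h * h * var + r)
    return mean + gain * (d - h * mean), (1.0 - gain * h) * var

# prior m ~ N(0, 4) and two observations given as (h, d, r) triples
obs = [(1.0, 2.3, 0.5), (2.0, 3.9, 1.0)]

# sequential assimilation
ms, vs = 0.0, 4.0
for h, d, r in obs:
    ms, vs = update(ms, vs, h, d, r)

# simultaneous assimilation: stack both observations into one update
H = np.array([[1.0], [2.0]])
D = np.array([2.3, 3.9])
R = np.diag([0.5, 1.0])
P = np.array([[4.0]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
mm = float((K @ D)[0])                        # prior mean is zero
vv = float(((np.eye(1) - K @ H) @ P)[0, 0])

assert abs(ms - mm) < 1e-12 and abs(vs - vv) < 1e-12
```

The paper's analytical question is precisely when equalities of this kind survive once the observation operators become nonlinear.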
International Nuclear Information System (INIS)
Barnea, N.; Liverts, E.
2010-01-01
In this paper we present an analytic expression for the Lorentz integral transform (LIT) of an arbitrary response function expressed as a polynomial times a decaying exponential. The resulting expression is applied to the inversion problem of the Lorentz integral transform, simplifying the inversion procedure and improving its accuracy. We have presented analytic formulae for a family of basis functions often used in the inversion of the LIT. These formulae allow for an efficient and accurate inversion. The quality and the stability of the resulting inversions were demonstrated through two different examples yielding outstanding results. (author)
DEFF Research Database (Denmark)
Friesel, Anna
2013-01-01
This paper presents the contents and the teaching methods used in the fourth semester course REG4E, covering an important subject in engineering, namely Control Theory and Dynamical Systems. Control Theory courses in engineering education are usually related to exercises in the laboratory or to projects. However, in order to understand the complexity of control systems, the students need to possess an analytical understanding of abstract mathematical problems. Our main goal is to illustrate the theory through the robot project, but at the same time we force our students to train their analytical skills...
New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems
International Nuclear Information System (INIS)
Brown, Forrest B.
2016-01-01
Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
A review of analytical techniques for the determination of carbon-14 in environmental samples
International Nuclear Information System (INIS)
Milton, G.M.; Brown, R.M.
1993-11-01
This report contains a brief summary of analytical techniques commonly used for the determination of radiocarbon in a variety of environmental samples. Details of the applicable procedures developed and tested in the Environmental Research Branch at Chalk River Laboratories are appended.
Field, M. Paul; Romaniello, Stephen; Gordon, Gwyneth W.; Anbar, Ariel D.; Herrmann, Achim; Martinez-Boti, Miguel A.; Anagnostou, Eleni; Foster, Gavin L.
2014-05-01
MC-ICP-MS has dramatically improved the analytical throughput for high-precision radiogenic and non-traditional isotope ratio measurements, compared to TIMS. The generation of large data sets, however, remains hampered by the tedious manual drip chromatography required for sample purification. A new, automated chromatography system reduces the laboratory bottleneck and expands the utility of high-precision isotope analyses in applications where large data sets are required: geochemistry, forensic anthropology, nuclear forensics, medical research and food authentication. We have developed protocols to automate ion exchange purification for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U) using the new prepFAST-MC™ (ESI, Omaha, Nebraska). The system is not only inert (all-fluoropolymer flow paths), but is also very flexible and can easily accommodate different resins, samples, and reagent types. When programmed, precise and accurate user-defined volumes and flow rates are implemented to automatically load samples, wash the column, condition the column and elute fractions. Unattended, the automated, low-pressure ion exchange chromatography system can process up to 60 samples overnight. Excellent reproducibility, reliability and recovery, with low blanks and carry-over, have been demonstrated for samples in a variety of different matrices, giving accurate and precise isotopic ratios within analytical error for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U). This illustrates the potential of the new prepFAST-MC™ as a powerful tool in radiogenic and non-traditional isotope research.
Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed, showing that the distribution of the support points has considerable influence... such that the error is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.
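A minimal sketch of the asymptotic sampling idea, under assumed simplifications (a linear Gaussian limit state and a single support point f):

```python
import numpy as np
from statistics import NormalDist

# Illustrative sketch of asymptotic sampling in an assumed simplified form:
# inflate the standard deviation of all variables by 1/f (f < 1) so failures
# become frequent, estimate the scaled reliability index from the failure
# fraction, and map it back. For this linear Gaussian limit state the mapping
# beta = beta(f)/f is exact; in general, beta(f)/f is fit over several f.

rng = np.random.default_rng(0)
n_dim, beta_true = 20, 3.0         # high-dimensional linear limit state
f = 0.5                            # support point (scaling factor)

U = rng.standard_normal((200_000, n_dim)) / f    # inflated-variance samples
g = beta_true - U.sum(axis=1) / np.sqrt(n_dim)   # g <= 0 means failure
p_fail_scaled = np.mean(g <= 0.0)                # now easily estimable

beta_f = NormalDist().inv_cdf(1.0 - p_fail_scaled)
beta_est = beta_f / f
assert abs(beta_est - beta_true) < 0.05
```

With the unscaled variables the target failure probability here is about 1.3e-3, which would need far more samples for the same accuracy; that is the efficiency gain the support-point scaling buys.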
The two-sample problem with induced dependent censorship.
Huang, Y
1999-12-01
Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.
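For intuition, the shape of such statistics can be sketched with a plain Kaplan-Meier estimate per sample and a unit-weight cumulative difference of the survival curves; this toy version, with made-up data, ignores the induced dependent censorship that the paper's estimators are actually designed to handle:

```python
import numpy as np

# Toy two-sample statistic: Kaplan-Meier estimate for each sample, then the
# cumulative (unit-weight) difference of the two survival curves on a grid.
# Data are made up; this is not the paper's censorship-corrected estimator.

def kaplan_meier(times, events):
    """Return event times and the survival estimate just after each event (no ties assumed)."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    s, ev_times, surv = 1.0, [], []
    n = len(t)
    for i in range(n):
        if e[i]:                          # event with n - i subjects still at risk
            s *= 1.0 - 1.0 / (n - i)
            ev_times.append(t[i])
            surv.append(s)
    return np.array(ev_times), np.array(surv)

def survival_at(ev_times, surv, grid):
    """Step-function evaluation of the Kaplan-Meier curve on a grid."""
    return np.array([surv[ev_times <= g][-1] if np.any(ev_times <= g) else 1.0
                     for g in grid])

def weighted_diff(t1, e1, t2, e2, grid):
    """Cumulative unit-weight difference of the two survival curves."""
    S1 = survival_at(*kaplan_meier(t1, e1), grid)
    S2 = survival_at(*kaplan_meier(t2, e2), grid)
    return float(np.sum(S1 - S2) * (grid[1] - grid[0]))

grid = np.linspace(0.0, 6.0, 61)
stat = weighted_diff([2, 4, 6], [1, 1, 1], [1, 2, 3], [1, 1, 1], grid)
assert stat > 0.0   # group 1 survives longer, so S1 - S2 integrates positive
```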
Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem
Younis, Mohammad I.
2014-08-17
We present analytical solutions of the electrostatically actuated, initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and to numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.
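The flavor of such closed-form estimates can be illustrated with a lumped single-degree-of-freedom analogue of electrostatic actuation (a sketch with assumed parameter values, not the paper's curled-beam formulas):

```python
import math

# Lumped single-degree-of-freedom analogue of electrostatic actuation: a
# stiffness k balances the parallel-plate force, k*x = eps0*A*V^2/(2*(g - x)^2).
# Pull-in occurs at x = g/3 with V_pi = sqrt(8*k*g^3/(27*eps0*A)). Parameter
# values below are assumed for illustration.

eps0 = 8.854e-12            # vacuum permittivity, F/m
k, A, g = 1.0, 1e-6, 2e-6   # stiffness N/m, electrode area m^2, gap m

def deflection(V, tol=1e-15):
    """Stable static deflection found by bisection on f(x) = k*x - Fe(x)."""
    f = lambda x: k * x - eps0 * A * V**2 / (2.0 * (g - x)**2)
    lo, hi = 0.0, g / 3.0   # the stable equilibrium branch lies below g/3
    if f(hi) < 0.0:
        raise ValueError("beyond pull-in voltage")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

V_pi = math.sqrt(8.0 * k * g**3 / (27.0 * eps0 * A))
x = deflection(0.5 * V_pi)   # well below pull-in: a small stable deflection
assert 0.0 < x < g / 3.0
```

The paper's single-mode Galerkin reduction leads to algebraic balance equations of this general kind, with the lumped stiffness and forcing replaced by modal quantities of the curled or tilted beam.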
Sample Preparation of Corn Seed Tissue to Prevent Analyte Relocations for Mass Spectrometry Imaging.
Kim, Shin Hye; Kim, Jeongkwon; Lee, Young Jin; Lee, Tae Geol; Yoon, Sohee
2017-08-01
Corn seed tissue sections were prepared by the tape support method using an adhesive tape, and mass spectrometry imaging (MSI) was performed. The effect of heat generated during sample preparation was investigated by time-of-flight secondary ion mass spectrometry (TOF-SIMS) imaging of corn seed tissue prepared by the tape support and the thaw-mounted methods. Unlike thaw-mounted sample preparation, the tape support method does not cause imaging distortion because of the absence of heat, which can cause migration of the analytes on the sample. By applying the tape support method, the corn seed tissue was prepared without structural damage, and MSI with accurate spatial information of analytes was successfully performed.
Analytical procedures for determining Pb and Sr isotopic compositions in water samples by ID-TIMS
Directory of Open Access Journals (Sweden)
Veridiana Martins
2008-01-01
Full Text Available Few articles deal with lead and strontium isotopic analysis of water samples. The aim of this study was to define the chemical procedures for Pb and Sr isotopic analyses of groundwater samples from an urban sedimentary aquifer. Thirty lead and fourteen strontium isotopic analyses were performed to test different analytical procedures. Pb and Sr isotopic ratios as well as Sr concentration did not vary using different chemical procedures. However, the Pb concentrations were very dependent on the different procedures. Therefore, the choice of the best analytical procedure was based on the Pb results, which indicated a higher reproducibility from samples that had been filtered and acidified before the evaporation, had their residues totally dissolved, and were purified by ion chromatography using the Biorad® column. Our results showed no changes in Pb ratios with the storage time.
An analytical nodal method for time-dependent one-dimensional discrete ordinates problems
International Nuclear Information System (INIS)
Barros, R.C. de
1992-01-01
In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations approximated by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids.
An analytical method for the inverse Cauchy problem of Lame equation in a rectangle
Grigor’ev, Yu
2018-04-01
In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lame equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
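The Lavrentiev step can be sketched in discretized form; the kernel, noise model, and regularization parameter below are illustrative assumptions, whereas the paper works with a closed-form series solution:

```python
import numpy as np

# Sketch of Lavrentiev regularization for a discretized first-kind integral
# equation: rather than solving K f = g directly, solve (alpha*I + K) f = g,
# which bounds the amplification of high-frequency noise by 1/alpha.

n = 200
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]

# symmetric positive-definite kernel, chosen only for illustration
K = np.exp(-np.abs(s[:, None] - s[None, :])) * h

f_true = np.sin(np.pi * s)
g_noisy = K @ f_true + 1e-4 * np.sin(50 * np.pi * s)  # data with oscillatory noise

alpha = 1e-2
f_reg = np.linalg.solve(alpha * np.eye(n) + K, g_noisy)  # Lavrentiev solution
f_raw = np.linalg.solve(K, g_noisy)                      # unregularized solution

err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
err_raw = np.linalg.norm(f_raw - f_true) / np.linalg.norm(f_true)
assert err_reg < err_raw   # regularization wins once the data are noisy
```

Lavrentiev regularization requires a positive semidefinite operator, which the kernel above satisfies; the price is a small bias on the smooth part of the solution, controlled by alpha.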
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow the incorporation of prior information of arbitrary complexity. If an analytical closed-form description of the prior is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed-form expression of the prior can be given. We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed-form description of the prior exists. First, we lay out the theoretical background...
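The key trick of such samplers, accepting with the likelihood ratio alone while the prior enters only through its sampler, can be sketched in a toy Gaussian case (an assumed model; sequential Gibbs would replace the independent prior draw below with partial resimulation of a complex prior):

```python
import numpy as np

# Sketch of the extended Metropolis idea behind sequential Gibbs sampling:
# proposals come from a sampler of the prior (here simply independent N(0,1)
# draws), and acceptance uses the likelihood ratio only, so no closed-form
# prior density is ever evaluated.

rng = np.random.default_rng(1)
sigma, d_obs = 1.0, 1.0                        # noise level and observed datum

def log_like(m):
    return -0.5 * ((d_obs - m) / sigma) ** 2   # forward model G(m) = m, assumed

m, ll = 0.0, log_like(0.0)
samples = []
for _ in range(100_000):
    m_prop = rng.standard_normal()             # exact draw from the prior
    ll_prop = log_like(m_prop)
    if np.log(rng.random()) < ll_prop - ll:    # likelihood-ratio acceptance
        m, ll = m_prop, ll_prop
    samples.append(m)

# for this Gaussian toy case the posterior is N(0.5, 0.5),
# so the sample mean should settle near 0.5
assert abs(np.mean(samples[1000:]) - 0.5) < 0.05
```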
Rapid Gamma Screening of Shipments of Analytical Samples to Meet DOT Regulations
International Nuclear Information System (INIS)
Wojtaszek, P.A.; Remington, D.L.; Ideker-Mulligan, V.
2006-01-01
The accelerated closure program at Rocky Flats required the capacity to ship up to 1000 analytical samples per week to off-site commercial laboratories, and to conduct such shipment within 24 hours of sample collection. During a period of near peak activity in the closure project, a regulatory change significantly increased the level of radionuclide data required for shipment of each package. In order to meet these dual challenges, a centralized and streamlined sample management program was developed which channeled analytical samples through a single, high-throughput radiological screening facility. This trailerized facility utilized high purity germanium (HPGe) gamma spectrometers to conduct screening measurements of entire packages of samples at once, greatly increasing throughput compared to previous methods. The In Situ Object Counting System (ISOCS) was employed to calibrate the HPGe systems to accommodate the widely varied sample matrices and packing configurations encountered. Optimum modeling and configuration parameters were determined. Accuracy of the measurements of grouped sample jars was confirmed with blind samples in multiple configurations. Levels of radionuclides not observable by gamma spectroscopy were calculated utilizing a spreadsheet program that can accommodate isotopic ratios for large numbers of different waste streams based upon acceptable knowledge. This program integrated all radionuclide data and output all information required for shipment, including the shipping class of the package. (authors)
Tank 241-AW-105, grab samples, analytical results for the final report
International Nuclear Information System (INIS)
Esch, R.A.
1997-01-01
This document is the final report for tank 241-AW-105 grab samples. Twenty grab samples were collected from risers 10A and 15A on August 20 and 21, 1996, of which eight were designated for the K Basin sludge compatibility and mixing studies. This document presents the analytical results for the remaining twelve samples. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for the Tank Farms Waste Compatibility Program (DQO). The results for the previous sampling of this tank were reported in WHC-SD-WM-DP-149, Rev. 0, 60-Day Waste Compatibility Safety Issue and Final Results for Tank 241-AW-105, Grab Samples 5AW-95-1, 5AW-95-2 and 5AW-95-3. Three supernate samples exceeded the TOC notification limit (30,000 µg C/g dry weight). Appropriate notifications were made. No immediate notifications were required for any other analyte. The TSAP requested analyses for polychlorinated biphenyls (PCB) for all liquid and centrifuged solid subsamples. The PCB analysis of the liquid samples has been delayed and will be presented in a revision to this document.
Integrated assessment of the global warming problem: A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1994-12-01
The multi-disciplinary character of the global warming problem calls for an integrated assessment approach for ordering and combining the various physical, ecological, economic, and sociological results. The Netherlands initiated their own National Research Program (NRP) on Global Air Pollution and Climate Change. The first phase (NRP-1) identified the integration theme as one of five central research themes. The second phase (NRP-2) shows a growing concern for integrated assessment issues. The current two-year research project 'Characterizing the risks: a comparative analysis of the risks of global warming and of relevant policy options', which started in September 1993, comes under the integrated assessment part of the Dutch NRP. The first part of the interim report describes the search for an integrated assessment methodology. It starts by emphasizing the need for integrated assessment at a relatively high level of aggregation and from a policy point of view. The conclusion is that a decision-analytical approach might fit the purpose of a policy-oriented integrated modeling of the global warming problem. The discussion proceeds with an account of decision analysis and its explicit incorporation and analysis of uncertainty. Then influence diagrams, a relatively recent development in decision analysis, are introduced as a useful decision-analytical approach for integrated assessment. Finally, a software environment for creating and analyzing complex influence diagram models is discussed. The second part of the interim report provides a first, provisional integrated modeling of the global warming problem, with emphasis on illustrating the decision-analytical approach. Major problem elements are identified and an initial problem structure is developed. The problem structure is described in terms of hierarchical influence diagrams. In some places the qualitative structure is filled in with quantitative data.
An approximate and an analytical solution to the carousel-pendulum problem
Energy Technology Data Exchange (ETDEWEB)
Vial, Alexandre [Pole Physique, Mecanique, Materiaux et Nanotechnologies, Universite de technologie de Troyes, 12, rue Marie Curie BP-2060, F-10010 Troyes Cedex (France)], E-mail: alexandre.vial@utt.fr
2009-09-15
We show that an improved solution to the carousel-pendulum problem can be easily obtained through a first-order Taylor expansion, and its accuracy is determined after obtaining an exact analytical solution that is unusable in practice and advantageously replaced by a numerical one. It is shown that the accuracy is unexpectedly high, even when the ratio of the pendulum length to the carousel radius approaches unity. (letters and comments)
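A sketch of the comparison, assuming the usual equilibrium condition tan(t) = omega^2 (R + L sin t) / g for a pendulum of length L hung at radius R on a carousel rotating at omega (parameter values are illustrative):

```python
import math

# Carousel-pendulum equilibrium (assumed form): tan(t) = omega^2 (R + L sin t) / g.
# The naive estimate t0 = omega^2 R / g ignores the outward offset L sin t of
# the bob; linearizing sin t ~ tan t ~ t gives the first-order improvement
# t1 = omega^2 R / (g - omega^2 L). We compare both against a numerical root.

g, R, L, omega = 9.81, 3.0, 1.0, 0.8   # SI units, illustrative values

def residual(t):
    return math.tan(t) - omega**2 * (R + L * math.sin(t)) / g

# numerical solution by bisection on [0, 1] (residual changes sign there)
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) < 0.0 else (lo, mid)
t_num = 0.5 * (lo + hi)

t_zeroth = omega**2 * R / g                      # ignores the pendulum offset
t_first = omega**2 * R / (g - omega**2 * L)      # first-order Taylor improvement
assert abs(t_first - t_num) < abs(t_zeroth - t_num)
```

For small equilibrium angles the first-order estimate tracks the numerical root noticeably better than the naive one, which is the effect the paper quantifies.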
A direct sampling method to an inverse medium scattering problem
Ito, Kazufumi
2012-01-10
In this work we present a novel sampling method for time-harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when the measured data are only available for one or two incident directions. A mathematical derivation is provided for its validation. Two- and three-dimensional numerical simulations are presented, which show that the method is accurate even with a few sets of scattered field data, computationally efficient, and very robust with respect to noise in the data. © 2012 IOP Publishing Ltd.
Analytical characterization using surface-enhanced Raman scattering (SERS) and microfluidic sampling
International Nuclear Information System (INIS)
Wang, Chao; Yu, Chenxu
2015-01-01
With the rapid development of analytical techniques, it has become much easier to detect chemical and biological analytes, even at very low detection limits. In recent years, techniques based on vibrational spectroscopy, such as surface enhanced Raman spectroscopy (SERS), have been developed for the non-destructive detection of pathogenic microorganisms. SERS is a highly sensitive analytical tool that can be used to characterize chemical and biological analytes interacting with SERS-active substrates. However, it has always been a challenge to obtain consistent and reproducible SERS spectroscopic results under complicated experimental conditions. Microfluidics, a tool for the highly precise manipulation of small-volume liquid samples, can be used to overcome the major drawbacks of SERS-based techniques. High reproducibility of SERS measurements can be obtained in the continuous flow generated inside microfluidic devices. This article provides a thorough review of the principles, concepts and methods of SERS-microfluidic platforms, and of the applications of such platforms in the trace analysis of chemical and biological analytes. (topical review)
Problem of the Moving Boundary in Continuous Casting Solved by The Analytic-Numerical Method
Directory of Open Access Journals (Sweden)
Grzymkowski R.
2013-03-01
Full Text Available Mathematical modeling of thermal processes combined with reversible phase transitions of the type solid phase - liquid phase leads to the formulation of a parabolic or elliptic moving boundary problem. The solution of such a problem most often requires the use of sophisticated numerical techniques and far advanced mathematical tools. The paper presents an analytic-numerical method, especially attractive from the engineer's point of view, applied for finding approximate solutions of a selected class of problems which can be reduced to the one-phase solidification problem of a plate with an a priori unknown, time-varying boundary of the region in which the solution is sought. The proposed method is based on the known formalism of the initial expansion of the sought function, describing the field of temperature, into a power series, some coefficients of which are determined with the aid of the boundary conditions, and on the approximation of the function defining the freezing front location with a broken line, the parameters of which are determined numerically. The method represents a combination of analytical and numerical techniques and seems to be an effective and relatively easy-to-use tool for solving problems of the considered kind.
Recent advances in computational-analytical integral transforms for convection-diffusion problems
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and against commercial or dedicated purely numerical approaches.
Salgueiro-González, N; Muniategui-Lorenzo, S; López-Mahía, P; Prada-Rodríguez, D
2017-04-15
In the last decade, the impact of alkylphenols and bisphenol A in the aquatic environment has been widely evaluated because of their heavy use in industrial and household applications as well as their toxicological effects. These compounds are well-known endocrine disrupting compounds (EDCs), which can affect the hormonal systems of humans and wildlife even at low concentrations. Because these pollutants enter the environment through water, which is the most affected compartment, analytical methods that allow their determination in aqueous samples at low levels are mandatory. In this review, an overview of the most significant advances in analytical methodologies for the determination of alkylphenols and bisphenol A in waters is presented (from 2002 to the present). Sample handling and instrumental detection strategies are critically discussed, including analytical parameters related to quality assurance and quality control (QA/QC). Special attention is paid to miniaturized sample preparation methodologies and to approaches proposed to reduce time and reagent consumption according to Green Chemistry principles, which have increased in the last five years. Finally, relevant applications of these methods to the analysis of water samples are examined, with wastewater and surface water the most investigated.
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and of CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in
Testing Homogeneity in a Semiparametric Two-Sample Problem
Directory of Open Access Journals (Sweden)
Yukun Liu
2012-01-01
Full Text Available We study a two-sample homogeneity testing problem in which one sample comes from a population with density f(x) and the other from a mixture population with mixture density (1−λ)f(x) + λg(x). This problem arises naturally in many statistical applications, such as tests for partial differential gene expression in microarray studies or genetic studies of gene mutation. Under the semiparametric assumption g(x) = f(x)e^{α+βx}, a penalized empirical likelihood ratio test could be constructed, but its implementation is hindered by the fact that there is neither a feasible algorithm for computing the test statistic nor available research on its theoretical properties. To circumvent these difficulties, we propose an EM test based on the penalized empirical likelihood. We prove that the EM test has a simple chi-square limiting distribution, and we also demonstrate its competitive testing performance by simulation. A real-data example is used to illustrate the proposed methodology.
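The exponential-tilt (density-ratio) assumption g(x) = f(x)e^{α+βx} is a standard model that can be estimated by ordinary logistic regression on the pooled sample: the slope estimates β, and the intercept differs from α only by the log of the sample-size ratio (zero for equal sizes). The sketch below illustrates that connection on simulated normal data; it is a minimal sketch of the tilt model itself, not the paper's penalized empirical likelihood EM test.

```python
import math, random

def fit_logistic(xs, ys, iters=30):
    """Newton-Raphson fit of P(y=1|x) = sigmoid(a + b*x)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            w = p * (1.0 - p)
            g0 += y - p          # score for intercept
            g1 += (y - p) * x    # score for slope
            h00 += w             # 2x2 observed information
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b

random.seed(42)
n = 5000
sample0 = [random.gauss(0, 1) for _ in range(n)]  # f(x): N(0,1)
sample1 = [random.gauss(1, 1) for _ in range(n)]  # g(x): N(1,1), i.e. alpha=-0.5, beta=1
xs = sample0 + sample1
ys = [0] * n + [1] * n
alpha_hat, beta_hat = fit_logistic(xs, ys)
print(alpha_hat, beta_hat)   # near (-0.5, 1.0)
```

With equal sample sizes the logistic intercept estimates α directly, so the recovered pair should be close to (−0.5, 1).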
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
Directory of Open Access Journals (Sweden)
Brady T West
Full Text Available Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.
2016-01-01
Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
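A minimal illustration of the kind of analytic error the article describes: ignoring unequal selection weights in a disproportionately stratified sample biases the estimated mean, while the design-weighted (Horvitz-Thompson style) estimator recovers it. All numbers below are hypothetical toy data, not SESTAT.

```python
import random

random.seed(1)

# Toy population: two strata with different means; stratum B will be oversampled.
pop_a = [random.gauss(10, 2) for _ in range(90000)]  # 90% of the population
pop_b = [random.gauss(20, 2) for _ in range(10000)]  # 10% of the population
true_mean = (sum(pop_a) + sum(pop_b)) / 100000       # about 11

# Disproportionate stratified sample: 1000 units from each stratum,
# so each stratum-A unit represents 90 population units, each B unit 10.
samp_a = random.sample(pop_a, 1000)
samp_b = random.sample(pop_b, 1000)
values = samp_a + samp_b
weights = [90.0] * 1000 + [10.0] * 1000

naive_mean = sum(values) / len(values)   # ignores the design: badly biased
weighted_mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(true_mean, naive_mean, weighted_mean)
```

The unweighted mean lands near 15 (the average of the two stratum means) instead of the true population mean near 11, which is exactly the sort of substantially biased estimate the meta-analysis flags.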
Problem Gambling in a Sample of Older Adult Casino Gamblers.
van der Maas, Mark; Mann, Robert E; McCready, John; Matheson, Flora I; Turner, Nigel E; Hamilton, Hayley A; Schrans, Tracy; Ialomiteanu, Anca
2017-01-01
As older adults continue to make up a greater proportion of the Canadian population, it becomes more important to understand the implications that their leisure activities have for their physical and mental health. Gambling, in particular, is a form of leisure that is becoming more widely available and has important implications for the mental health and financial well-being of older adults. This study examines a large sample (2103) of casino-going Ontarian adults over the age of 55 and identifies those features of their gambling participation that are associated with problem gambling. Logistic regression analysis is used to analyze the data. Focusing on types of gambling participated in and motivations for visiting the casino, this study finds that several forms of gambling and motivations to gamble are associated with greater risk of problem gambling. It also finds that some motivations are associated with lower risk of problem gambling. The findings of this study have implications related to gambling availability within an aging population.
Gu, Zhi-Yuan; Yang, Cheng-Xiong; Chang, Na; Yan, Xiu-Ping
2012-05-15
In modern analytical chemistry researchers pursue novel materials to meet analytical challenges such as improvements in sensitivity, selectivity, and detection limit. Metal-organic frameworks (MOFs) are an emerging class of microporous materials, and their unusual properties such as high surface area, good thermal stability, uniform structured nanoscale cavities, and the availability of in-pore functionality and outer-surface modification are attractive for diverse analytical applications. This Account summarizes our research on the analytical applications of MOFs ranging from sampling to chromatographic separation. MOFs have been either directly used or engineered to meet the demands of various analytical applications. Bulk MOFs with microsized crystals are convenient sorbents for direct application to in-field sampling and solid-phase extraction. Quartz tubes packed with MOF-5 have shown excellent stability, adsorption efficiency, and reproducibility for in-field sampling and trapping of atmospheric formaldehyde. The 2D copper(II) isonicotinate packed microcolumn has demonstrated large enhancement factors and good shape- and size-selectivity when applied to on-line solid-phase extraction of polycyclic aromatic hydrocarbons in water samples. We have explored the molecular sieving effect of MOFs for the efficient enrichment of peptides with simultaneous exclusion of proteins from biological fluids. These results show promise for the future of MOFs in peptidomics research. Moreover, nanosized MOFs and engineered thin films of MOFs are promising materials as novel coatings for solid-phase microextraction. We have developed an in situ hydrothermal growth approach to fabricate thin films of MOF-199 on etched stainless steel wire for solid-phase microextraction of volatile benzene homologues with large enhancement factors and wide linearity. Their high thermal stability and easy-to-engineer nanocrystals make MOFs attractive as new stationary phases to fabricate MOF
Directory of Open Access Journals (Sweden)
Abuzar Kabir
2017-12-01
Full Text Available Sample preparation has been recognized as a major step in the chemical analysis workflow. As such, substantial efforts have been made in recent years to simplify the overall sample preparation process. Major focuses of these efforts have included miniaturization of the extraction device; minimizing or eliminating toxic and hazardous organic solvent consumption; eliminating sample pre-treatment and post-treatment steps; reducing the sample volume requirement; reducing the extraction equilibrium time; maximizing extraction efficiency, etc. All these improved attributes are congruent with the Green Analytical Chemistry (GAC) principles. Classical sample preparation techniques such as solid phase extraction (SPE) and liquid-liquid extraction (LLE) are being rapidly replaced with emerging miniaturized and environmentally friendly techniques such as Solid Phase Micro Extraction (SPME), Stir bar Sorptive Extraction (SBSE), Micro Extraction by Packed Sorbent (MEPS), Fabric Phase Sorptive Extraction (FPSE), and Dispersive Liquid-Liquid Micro Extraction (DLLME). In addition to the development of many new generic extraction sorbents in recent years, a large number of molecularly imprinted polymers (MIPs) created using different template molecules have also enriched the large cache of microextraction sorbents. Application of nanoparticles as high-performance extraction sorbents has undoubtedly elevated the extraction efficiency and method sensitivity of modern chromatographic analyses to a new level. Combining magnetic nanoparticles with many microextraction sorbents has opened up new possibilities to extract target analytes from sample matrices containing high volumes of matrix interferents. The aim of the current review is to critically audit the progress of microextraction techniques in recent years, which has indisputably transformed analytical chemistry practices, from biological and therapeutic drug monitoring to the environmental field; from foods to phyto
International Nuclear Information System (INIS)
Goodfellow, G.I.
1978-01-01
The steam and water in CEGB Magnox and AGR nuclear boilers are continuously monitored, using both laboratory techniques and on-line instrumentation, in order to maintain the chemical quality within pre-determined limits. The sampling systems in use and some of the difficulties associated with sampling requirements are discussed. The relative merits of chemical instruments installed either locally in various parts of the plant or in centralized instrument rooms are reviewed. The quality of water in nuclear boilers, as with all high-pressure steam-raising plant, is extremely high; consequently very sensitive analytical procedures are required, particularly for monitoring the feed-water of 'once-through boiler' systems. Considerable progress has been made in this field, and examples are given of some of the techniques developed for analyses at the μg/kg level, together with some of the current problems. (author)
Analytical study on the determination of boron in environmental water samples
International Nuclear Information System (INIS)
Lopez, F.J.; Gimenez, E.; Hernandez, F.
1993-01-01
An analytical study on the determination of boron in environmental water samples was carried out. The curcumin and carmine standard methods were compared with the more recent Azomethine-H method in order to evaluate their analytical characteristics and feasibility for the analysis of boron in water samples. Analyses of synthetic water, ground water, sea water and waste water samples were carried out and a statistical evaluation of the results was made. The Azomethine-H method was found to be the most sensitive (detection limit 0.02 mg l⁻¹) and selective (no interference from commonly occurring ions in water was observed), also showing the best precision (relative standard deviation lower than 4%). Moreover, it gave good results for all types of samples analyzed. The accuracy of this method was tested by the addition of known amounts of standard solutions to different types of water samples. The slopes of standard additions and direct calibration graphs were similar, and recoveries of added boron ranged from 99 to 107%. (orig.)
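The accuracy check by standard additions mentioned above can be sketched numerically: fit a least-squares line to (added concentration, signal) pairs, then read the sample concentration off the magnitude of the x-intercept. The absorbance values below are hypothetical illustration data, not results from the study.

```python
def standard_additions(added, signal):
    """Least-squares line through (added concentration, signal) points;
    the sample concentration is the magnitude of the x-intercept,
    i.e. intercept / slope, in the same units as 'added'."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Hypothetical readings for boron spikes of 0 to 0.4 mg/L added to one sample.
added = [0.0, 0.1, 0.2, 0.3, 0.4]
signal = [0.150, 0.225, 0.300, 0.375, 0.450]   # perfectly linear toy data
print(standard_additions(added, signal))        # 0.2 mg/L in the sample
```

Comparing this standard-additions slope with the direct-calibration slope, as the study does, diagnoses matrix effects: similar slopes mean the matrix does not perturb the sensitivity.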
Statistical Mechanics of a Simplified Bipartite Matching Problem: An Analytical Treatment
Dell'Erba, Matías Germán
2012-03-01
We perform an analytical study of a simplified bipartite matching problem in which there exists a constant matching energy, and both heterosexual and homosexual pairings are allowed. We obtain the partition function in closed analytical form and we calculate the corresponding thermodynamic functions of this model. We conclude that the model is favored at high temperatures, for which the probabilities of heterosexual and homosexual pairs tend to become equal. In the limits of low and high temperatures the system is extensive; however, this property is lost in the general case. There exists a relation between the matching energies for which the system becomes more stable under external (thermal) perturbations. As the difference between the energies of the two possible matches increases, the system becomes more ordered, while the maximum of entropy is achieved when these energies are equal. In this limit, there is a first-order phase transition between two phases with constant entropy.
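A toy version of such a matching model can be checked by brute-force enumeration: sum e^{−βE} over all perfect matchings, with one energy per mixed pair and another per same-sex pair. The Hamiltonian below is an assumed simplification for illustration, not necessarily the paper's exact model; it does reproduce the high-temperature behavior noted in the abstract, where Z reduces to the number of matchings and all pairings become equally likely.

```python
import math

def matchings(people):
    """Recursively generate all perfect matchings of an even-sized list."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i, partner in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def partition_function(n_men, n_women, e_het, e_hom, beta):
    """Z = sum over perfect matchings of exp(-beta * E), where
    E = e_het * (# mixed pairs) + e_hom * (# same-sex pairs).
    Assumed toy Hamiltonian for illustration only."""
    people = [('M', i) for i in range(n_men)] + [('F', i) for i in range(n_women)]
    z = 0.0
    for m in matchings(people):
        n_het = sum(1 for a, b in m if a[0] != b[0])
        n_hom = len(m) - n_het
        z += math.exp(-beta * (e_het * n_het + e_hom * n_hom))
    return z

# Two men and two women: 3 perfect matchings (two all-mixed, one all-same-sex),
# so Z -> 3 as beta -> 0 and every matching becomes equally probable.
print(partition_function(2, 2, 1.0, 2.0, 0.0))
print(partition_function(2, 2, 1.0, 2.0, 1.0))
```

At finite β the lower-energy matchings dominate Z, which is the enumeration-level counterpart of the ordering the abstract describes as the energy difference grows.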
Analytical results from salt batch 9 routine DSSHT and SEHT monthly samples
Energy Technology Data Exchange (ETDEWEB)
Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-06-01
Strip Effluent Hold Tank (SEHT) and Decontaminated Salt Solution Hold Tank (DSSHT) samples from several of the "microbatches" of Integrated Salt Disposition Project (ISDP) Salt Batch ("Macrobatch") 9 have been analyzed for ²³⁸Pu, ⁹⁰Sr, ¹³⁷Cs, cations (Inductively Coupled Plasma Emission Spectroscopy - ICPES), and anions (Ion Chromatography Anions - IC-A). The analytical results from the current microbatch samples are similar to those from previous macrobatch samples. The Cs removal continues to be acceptable, with decontamination factors (DF) averaging 25700 (107% RSD). The bulk chemistry of the DSSHT and SEHT samples does not show any signs of unusual behavior, other than lacking the anticipated degree of dilution that is calculated to occur during Modular Caustic-Side Solvent Extraction Unit (MCU) processing.
International Nuclear Information System (INIS)
1995-01-01
The 4th Poznań Analytical Seminar on Modern Methods of Sample Preparation and Trace Amounts Determination of Elements was held in Poznań on 27-28 April 1995. New versions of analytical methods were presented for the quantitative determination of trace elements in biological, environmental and geological materials. A number of special sample preparation techniques enabling the best precision of analytical results were also shown and discussed.
Energy Technology Data Exchange (ETDEWEB)
Spoerl, Andreas
2008-06-05
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
Sample problem calculations related to two-phase flow transients in a PWR relief-piping network
International Nuclear Information System (INIS)
Shin, Y.W.; Wiedermann, A.H.
1981-03-01
Two sample problems related to the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code, STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test whether computer codes or computational methods have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement a number of additional boundary conditions into the STAC code in order to calculate the sample problems. This includes the dynamics of the fluid interface, which is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracy achieved by the STAC code, analytical solutions are also obtained and used as a basis for comparison.
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples
Directory of Open Access Journals (Sweden)
Margalida Artigues
2017-11-01
Full Text Available Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (Chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO2NTAs) has been evaluated. The GOx–Chitosan/TiO2NTAs biosensor showed a sensitivity of 5.46 μA·mM−1 with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95–105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid and citric acid were obtained. In addition, the storage stability was further examined: after 30 days, the GOx–Chitosan/TiO2NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst scenario, a deviation smaller than 10% was obtained among the 20 samples evaluated.
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples
2017-01-01
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (Chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO2NTAs) has been evaluated. The GOx–Chitosan/TiO2NTAs biosensor showed a sensitivity of 5.46 μA·mM−1 with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95–105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid and citric acid were obtained. In addition, the storage stability was further examined, after 30 days, the GOx–Chitosan/TiO2NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst scenario, a deviation smaller than 10% was obtained among the 20 samples evaluated. PMID:29135931
International Nuclear Information System (INIS)
Montaser, A.
1993-01-01
In this research, new high-temperature plasmas and new sample introduction systems are explored for rapid elemental and isotopic analysis of gases, solutions, and solids using mass spectrometry and atomic emission spectrometry. During the period January 1993--December 1993, emphasis was placed on (a) analytical investigations of atmospheric-pressure helium inductively coupled plasma (He ICP) that are suitable for atomization, excitation, and ionization of elements possessing high excitation and ionization energies; (b) simulation and computer modeling of plasma sources to predict their structure and fundamental and analytical properties without incurring the enormous cost of experimental studies; (c) spectroscopic imaging and diagnostic studies of high-temperature plasmas; (d) fundamental studies of He ICP discharges and argon-nitrogen plasma by high-resolution Fourier transform spectrometry; and (e) fundamental and analytical investigation of new, low-cost devices as sample introduction systems for atomic spectrometry and examination of new diagnostic techniques for probing aerosols. Only the most important achievements are included in this report to illustrate progress and obstacles. Detailed descriptions of the authors' investigations are outlined in the reprints and preprints that accompany this report. The technical progress expected next year is briefly described at the end of this report
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples.
Artigues, Margalida; Abellà, Jordi; Colominas, Sergi
2017-11-14
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (Chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO₂NTAs) has been evaluated. The GOx-Chitosan/TiO₂NTAs biosensor showed a sensitivity of 5.46 μA·mM⁻¹ with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95-105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid and citric acid were obtained. In addition, the storage stability was further examined: after 30 days, the GOx-Chitosan/TiO₂NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst scenario, a deviation smaller than 10% was obtained among the 20 samples evaluated.
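Given the reported sensitivity (5.46 μA·mM⁻¹) and linear range, converting a measured current to a glucose concentration, and comparing against an HPLC value as the study does, is a one-line inversion of the calibration curve. The sketch below assumes a zero blank-current intercept, which is hypothetical; the abstract does not state one.

```python
SENSITIVITY = 5.46   # uA per mM, from the abstract
INTERCEPT = 0.0      # assumed zero blank current (hypothetical)

def current_to_glucose(i_ua):
    """Invert the linear calibration I = SENSITIVITY * C + INTERCEPT,
    rejecting readings outside the validated 0.3-1.5 mM linear range."""
    c = (i_ua - INTERCEPT) / SENSITIVITY
    if not 0.3 <= c <= 1.5:
        raise ValueError(f"{c:.3f} mM is outside the validated linear range")
    return c

def percent_deviation(biosensor_mm, hplc_mm):
    """Relative deviation of the biosensor result from the HPLC reference."""
    return 100.0 * abs(biosensor_mm - hplc_mm) / hplc_mm

c = current_to_glucose(5.46)          # 1.0 mM by construction
print(c, percent_deviation(c, 1.05))  # deviation vs. a hypothetical HPLC value
```

A deviation below 10%, as in the paper's worst case, would pass this simple comparison.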
DEFF Research Database (Denmark)
Ejegod, Ditte Møller; Pedersen, Helle; Alzua, Garazi Peña
2018-01-01
As a new initiative, HPV self-sampling to non-attenders using the dry Evalyn self-sampling brush is offered in the Capital Region of Denmark. The use of a dry brush is largely uncharted territory in terms of analytical stability. In this study we aim to provide evidence on the analytical quality...
International Nuclear Information System (INIS)
Boisseau, Bruno; Forgacs, Peter; Giacomini, Hector
2007-01-01
A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable for both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to find the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energy of the skyrmion and of the monopole is also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision. (fast track communication)
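The "connecting parameter" idea, tuning a parameter until a locally well-behaved solution extends to an acceptable global one, can also be illustrated with the classic shooting method, which is a different (purely numerical) technique from the paper's algebraic scheme. The sketch below recovers the ground-state energy of the quartic anharmonic oscillator −u″ + x⁴u = Eu, whose known value is E₀ ≈ 1.0604, by bisecting on the energy until the outward-integrated even solution stops diverging with a fixed sign.

```python
def shoot(E, L=6.0, h=1e-3):
    """RK4-integrate u'' = (x^4 - E) * u outward from x = 0 with the
    even-parity start u(0) = 1, u'(0) = 0; return u(L)."""
    def f(x, u, v):
        return v, (x ** 4 - E) * u
    u, v, x = 1.0, 0.0, 0.0
    for _ in range(int(L / h)):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

# Bisect on E: below the eigenvalue the solution diverges to +inf,
# just above it the solution acquires a node and diverges to -inf.
lo, hi = 0.5, 1.5
flo = shoot(lo)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    fmid = shoot(mid)
    if flo * fmid > 0:
        lo, flo = mid, fmid
    else:
        hi = mid
E0 = 0.5 * (lo + hi)
print(E0)   # close to the known value of about 1.0604
```

Here the energy plays exactly the role of a connecting parameter: only at the eigenvalue does the solution analytic at the origin connect to the decaying solution at infinity.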
An Analytical Model for Multilayer Well Production Evaluation to Overcome Cross-Flow Problem
Hakiki, Farizal; Wibowo, Aris T.; Rahmawati, Silvya D.; Yasutra, Amega; Sukarno, Pudjo
2017-01-01
One of the major concerns in a multi-layer system is that interlayer cross-flow may occur if reservoir fluids are produced from commingled layers that have unequal initial pressures. A reservoir commonly has a higher average pressure (pore fluid pressure) at greater depth; reservoir productivity or injectivity, however, does not necessarily follow this trend. A layer with a relatively low average pressure and high injectivity tends to experience cross-flow: fluid from a bottom layer flows into an upper layer, restricting the upper-layer fluid from flowing into the wellbore, as if the bottom layer were performing an injection treatment. The study uses the productivity index, rather than the injectivity index, as the parameter that accounts for cross-flow, since the well is a production well. The analytical study models the multilayer reservoir so as to avoid the cross-flow problem. The analytical model employed hypothetical and real field data to test it. The scope of this study is: (a) develop a mathematics-based solution to determine the production rate from each layer; (b) assess different scenarios to optimize the production rate, namely pump setting depth and the performance of an in-situ choke (ISC) installation. The ISC acts like an inflow control device (ICD) that helps reduce cross-flow occurrence. This study employed a macro program to write the code and develop the interface. The analytical model is solved by a fast iterative procedure. Comparison shows that the mathematics-based solution is in good agreement with results derived from commercial software.
A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.
Feo, M L; Eljarrat, E; Barceló, D
2010-04-09
A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as the extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L(-1), with correlation coefficients ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied to the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L(-1). Copyright 2010 Elsevier B.V. All rights reserved.
A combined analytic-numeric approach for some boundary-value problems
Directory of Open Access Journals (Sweden)
Mustafa Turkyilmazoglu
2016-02-01
A combined analytic-numeric approach is undertaken in the present work for the solution of boundary-value problems in the finite or semi-infinite domains. Equations to be treated arise specifically from the boundary layer analysis of some two and three-dimensional flows in fluid mechanics. The purpose is to find quick but accurate enough solutions. Taylor expansions at either boundary conditions are computed which are next matched to the other asymptotic or exact boundary conditions. The technique is applied to the well-known Blasius as well as Karman flows. Solutions obtained in terms of series compare favorably with the existing ones in the literature.
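The Blasius problem named in this record has a well-tabulated wall-shear constant that any solution technique can be checked against. The sketch below recovers it by plain RK4 shooting rather than the record's Taylor-matching; with the convention f''' + (1/2) f f'' = 0, the classical value is f''(0) ≈ 0.332057:

```python
def blasius_rhs(eta, y):
    # Blasius equation f''' + 0.5*f*f'' = 0, written for y = (f, f', f'')
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def rk4(f, y0, x0, x1, n):
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*k for yi, k in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*k for yi, k in zip(y, k2)])
        k4 = f(x + h, [yi + h*k for yi, k in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def fprime_far(s, eta_max=10.0):
    """f'(eta_max) for wall curvature s = f''(0); should approach 1."""
    return rk4(blasius_rhs, (0.0, 0.0, s), 0.0, eta_max, 2000)[1]

# f'(infinity) increases monotonically with s, so bisect on s.
lo, hi = 0.2, 0.5
for _ in range(50):
    mid = (lo + hi) / 2
    if fprime_far(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s_wall = (lo + hi) / 2   # classical value ~0.332057
```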
International Nuclear Information System (INIS)
Murthy, K.P.N.; Indira, R.
1986-01-01
An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing, for which it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the recently proposed scattering angle biasing. It is demonstrated that the coupled scheme performs better than exponential biasing.
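Exponential (path-length) biasing as studied in this record can be demonstrated on a one-collision toy problem: transmission through a purely absorbing slab, where the analog estimator is a rare Bernoulli event but a stretched-exponential sampling density with a likelihood-ratio weight keeps the estimator unbiased while cutting the variance. The slab thickness and biasing parameter below are illustrative, not the record's values:

```python
import math, random

random.seed(2024)
TAU = 5.0                      # optical thickness of a purely absorbing slab
N = 20000
EXACT = math.exp(-TAU)         # analytic transmission probability

# Analog game: score 1 if the sampled free path exceeds the slab thickness.
analog = [1.0 if random.expovariate(1.0) > TAU else 0.0 for _ in range(N)]

# Exponential biasing: sample the path from a stretched exponential (mean LAM)
# and carry the weight (true pdf / sampling pdf) so the estimator stays unbiased.
LAM = 3.0
biased = []
for _ in range(N):
    x = random.expovariate(1.0 / LAM)
    w = LAM * math.exp(-x * (1.0 - 1.0 / LAM))
    biased.append(w if x > TAU else 0.0)

def mean_var(scores):
    m = sum(scores) / len(scores)
    v = sum((s - m) ** 2 for s in scores) / (len(scores) - 1)
    return m, v

m_analog, v_analog = mean_var(analog)
m_biased, v_biased = mean_var(biased)
```

As the record emphasizes, the best stretching parameter is problem-dependent; with scattering included, the optimum shifts with the medium's scattering properties.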
Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.
2018-03-01
Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.
Cuadrado-Cenzual, M A; García Briñón, M; de Gracia Hills, Y; González Estecha, M; Collado Yurrita, L; de Pedro Moro, J A; Fernández Pérez, C; Arroyo Fernández, M
2015-01-01
Errors in the identification of patients and biological samples are among the problems with the highest risk of causing an adverse event for the patient. The aims were to detect and analyse the causes of patient identification errors in analytical requests (PIEAR) from emergency departments, and to develop improvement strategies. A process and protocol were designed to be followed by all professionals involved in requesting and performing laboratory tests. Indicators for evaluating and monitoring PIEAR were determined before and after the implementation of these improvement measures (years 2010-2014). A total of 316 PIEAR were detected among 483,254 emergency service requests during the study period, a mean of 6.80 per 10,000 requests. Patient identification failure was the most frequent error in all of the 6-monthly periods assessed, with a significant difference (P<…); the improvement measures were effective in reducing these errors. However, we must continue working with this strategy, promoting a culture of safety among all the professionals involved, and trying to achieve the goal that 100% of analytical requests and samples are properly identified. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
An analytical examination of distortions in power spectra due to sampling errors
International Nuclear Information System (INIS)
Njau, E.C.
1982-06-01
Distortions introduced into the spectral energy densities of sinusoidal signals, as well as those of more complex signals, through different forms of errors in signal sampling are derived and demonstrated analytically. The approach adopted involves, firstly, developing for each type of signal and the corresponding form of sampling error an analytical expression that describes the faulty digitization process in terms of the features of the particular signal. Secondly, we take advantage of a method described elsewhere [IC/82/44] to relate, as far as possible, the true spectral energy density of the signal to the corresponding spectral energy density of the faulty digitization process. Thirdly, we develop expressions which reveal the distortions that are formed in the directly computed spectral energy density of the digitized signal. It is evident from the formulations developed herein that the types of sampling errors taken into consideration may create false peaks and other distortions of non-negligible concern in computed power spectra. (author)
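The false peaks this record warns about are easy to reproduce: a periodic timing error phase-modulates the samples of a sinusoid and deposits spurious sidebands at the signal frequency plus or minus the error frequency (with amplitudes governed by Bessel functions, here J1(beta)/J0(beta) ≈ 0.26 for beta = 0.5). A minimal sketch with a direct DFT, all parameters illustrative:

```python
import cmath, math

N = 64
F_SIG = 8          # signal frequency in DFT bins
F_ERR = 4          # frequency of the periodic sampling error, in bins
BETA = 0.5         # amplitude of the periodic timing error (radians of phase)

def dft_mag(x):
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

# Ideal sampling: all energy sits in the signal bin.
clean = [math.sin(2 * math.pi * F_SIG * n / N) for n in range(N)]

# Faulty sampling: a periodic timing error phase-modulates the samples,
# creating false sidebands at F_SIG +/- F_ERR.
jittered = [math.sin(2 * math.pi * F_SIG * n / N
                     + BETA * math.sin(2 * math.pi * F_ERR * n / N))
            for n in range(N)]

mag_clean = dft_mag(clean)
mag_jit = dft_mag(jittered)
sideband_ratio = mag_jit[F_SIG + F_ERR] / mag_jit[F_SIG]
```

The sideband is absent in the ideally sampled spectrum and clearly visible in the faulty one, i.e. a "false peak" created purely by the sampling error.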
Energy Technology Data Exchange (ETDEWEB)
Salminen, S.
2009-07-01
In this work, separation methods have been developed for the analysis of anthropogenic transuranium elements plutonium, americium, curium and neptunium from environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found at both Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and Sodankylae (Finland). The origin of plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylae, plutonium in the surface air originated from nuclear weapons testing, conducted mostly by USSR and USA before the sampling year 1963. The variation in americium, curium and neptunium concentrations was great as well in peat samples collected in southern and central Finland in 1986 immediately after the Chernobyl accident. The main source of transuranium contamination in peats was from global nuclear test fallout, although there are wide regional differences in the fraction of Chernobyl-originated activity (of the total activity) for americium, curium and neptunium. The separation methods developed in this study yielded good chemical recovery for the elements investigated and adequately pure fractions for radiometric activity determination. The extraction chromatographic methods were faster compared to older methods based on ion exchange chromatography. In addition, extraction chromatography is a more environmentally friendly separation method than ion exchange, because less acidic waste solutions are produced during the analytical procedures. (orig.)
Analytic solution of the relativistic Coulomb problem for a spinless Salpeter equation
International Nuclear Information System (INIS)
Durand, B.; Durand, L.
1983-01-01
We construct an analytic solution to the spinless S-wave Salpeter equation for two quarks interacting via a Coulomb potential, [2(−∇² + m²)^(1/2) − M − α/r]ψ(r) = 0, by transforming the momentum-space form of the equation into a mapping or boundary-value problem for analytic functions. The principal part of the three-dimensional wave function is identical to the solution of a one-dimensional Salpeter equation found by one of us and discussed here. The remainder of the wave function can be constructed by the iterative solution of an inhomogeneous singular integral equation. We show that the exact bound-state eigenvalues for the Coulomb problem are M_n = 2m/(1 + α²/4n²)^(1/2), n = 1,2,..., and that the wave function for the static interaction diverges for r→0 as C(mr)^(−ν), where ν = (α/π)(1 + α/π + ...) is known exactly
An analytical solution to the heat transfer problem in thick-walled hunt flow
International Nuclear Information System (INIS)
Bluck, Michael J; Wolfendale, Michael J
2017-01-01
Highlights: • Convective heat transfer in Hunt type flow of a liquid metal in a rectangular duct. • Analytical solution to the H1 constant peripheral temperature in a rectangular duct. • New H1 result demonstrating the enhancement of heat transfer due to flow distortion by the applied magnetic field. • Analytical solution to the H2 constant peripheral heat flux in a rectangular duct. • New H2 result demonstrating the reduction of heat transfer due to flow distortion by the applied magnetic field. • Results are important for validation of CFD in magnetohydrodynamics and for implementation of systems code approaches. - Abstract: The flow of a liquid metal in a rectangular duct, subject to a strong transverse magnetic field is of interest in a number of applications. An important application of such flows is in the context of coolants in fusion reactors, where heat is transferred to a lead-lithium eutectic. It is vital, therefore, that the heat transfer mechanisms are understood. Forced convection heat transfer is strongly dependent on the flow profile. In the hydrodynamic case, Nusselt numbers and the like have long been well characterised in duct geometries. In the case of liquid metals in strong magnetic fields (magnetohydrodynamics), the flow profiles are very different and one can expect a concomitant effect on convective heat transfer. For fully developed laminar flows, the magnetohydrodynamic problem can be characterised in terms of two coupled partial differential equations. The problem of heat transfer for perfectly electrically insulating boundaries (Shercliff case) has been studied previously (Bluck et al., 2015). In this paper, we demonstrate corresponding analytical solutions for the case of conducting Hartmann walls of arbitrary thickness. The flow is very different from the Shercliff case, exhibiting jets near the side walls and core flow suppression which have profound effects on heat transfer.
International Nuclear Information System (INIS)
DOUGLAS JG; MEZNARICH HD, PHD; OLSEN JR; ROSS GA; STAUFFER M
2008-01-01
effectively remove inorganic chloride from the activated carbon adsorption tubes. With the TOX sample preparation equipment and TOX analyzers at WSCF, the nitrate wash recommended by EPA SW-846 method 9020B was found to be inadequate to remove inorganic chloride interference. Increasing the nitrate wash concentration from 10 grams per liter (g/L) to 100 g/L potassium nitrate and increasing the nitrate wash volume from 3 milliliters (mL) to 10 mL effectively removed the inorganic chloride up to at least 100 ppm chloride in the sample matrix. Excessive purging of the adsorption tubes during sample preparation was eliminated. These changes in sample preparation have been incorporated in the analytical procedure. The results using the revised sample preparation procedure show better agreement of TOX values both for replicate analyses of single samples and for the analysis of replicate samples acquired from the same groundwater well. Furthermore, less apparent column breakthrough now occurs with the revised procedure. One additional modification made to sample preparation was to discontinue the treatment of groundwater samples with sodium bisulfite. Sodium bisulfite is used to remove inorganic chlorine from the sample; inorganic chlorine is not expected to be a constituent in these groundwater samples. Several other factors were also investigated as possible sources of anomalous TOX results: (1) Instrument instability: examination of the history of results for TOX laboratory control samples and initial calibration verification standards indicate good long-term precision for the method and instrument. Determination of a method detection limit of 2.3 ppb in a deionized water matrix indicates the method and instrumentation have good stability and repeatability. (2) Non-linear instrument response: the instrument is shown to have good linear response from zero to 200 parts per billion (ppb) TOX. 
This concentration range encompasses the majority of samples received at WSCF for TOX analysis.
Bobaly, Balazs; D'Atri, Valentina; Goyon, Alexandre; Colas, Olivier; Beck, Alain; Fekete, Szabolcs; Guillarme, Davy
2017-08-15
The analytical characterization of therapeutic monoclonal antibodies and related proteins usually incorporates various sample preparation methodologies. Indeed, quantitative and qualitative information can be enhanced by simplifying the sample, thanks to the removal of sources of heterogeneity (e.g. N-glycans) and/or by decreasing the molecular size of the tested protein by enzymatic or chemical fragmentation. These approaches make the sample more suitable for chromatographic and mass spectrometric analysis. Structural elucidation and quality control (QC) analysis of biopharmaceuticals are usually performed at intact, subunit and peptide levels. In this paper, general sample preparation approaches used to attain peptide, subunit and glycan level analysis are overviewed. Protocols are described to perform tryptic proteolysis, IdeS and papain digestion, reduction, as well as deglycosylation by PNGase F and EndoS2 enzymes. Both historical and modern sample preparation methods were compared and evaluated using rituximab and trastuzumab, two reference therapeutic mAb products approved by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA). The described protocols may help analysts to develop sample preparation methods in the field of therapeutic protein analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies
Levy, I.; Povinec, P. P.; Aoyama, M.; Hirose, K.; Sanchez-Cabeza, J. A.; Comanducci, J.-F.; Gastaud, J.; Eriksson, M.; Hamajima, Y.; Kim, C. S.; Komura, K.; Osvath, I.; Roos, P.; Yim, S. A.
2011-04-01
The Japan Agency for Marine Earth Science and Technology conducted in 2003-2004 the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere Oceans, which was a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling and analytical methodologies based on radiochemical separations of Cs and Pu from seawater, as well as radiometric and mass spectrometry measurements. Several laboratories took part in radionuclide analyses using different techniques. The intercomparison exercises and analyses of certified reference materials showed a reasonable agreement between the participating laboratories. The obtained data on the distribution of 137Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere Oceans.
Rapi, Stefano; Berardi, Margherita; Cellai, Filippo; Ciattini, Samuele; Chelazzi, Laura; Ognibene, Agostino; Rubeca, Tiziana
2017-07-24
Information on preanalytical variability is mandatory to bring laboratories up to ISO 15189 requirements. Fecal sampling is greatly affected by a lack of harmonization in laboratory medicine. The aims of this study were to obtain information on the devices used for fecal sampling and to explore the effect of different amounts of feces on the results from the fecal immunochemical test for hemoglobin (FIT-Hb). Four commercial sample collection devices for quantitative FIT-Hb measurements were investigated. The volume of interest (VOI) of the probes was calculated from their diameter and geometry. Quantitative measurements of the mass of feces were carried out by gravimetry. The effects of an increased amount of feces on the analytical environment were investigated by measuring Hb values with a single analytical method. VOI was 8.22, 7.1 and 9.44 mm3 for probes that collected a target of 10 mg of feces, and 3.08 mm3 for one probe that targeted 2 mg of feces. The ratio between the recovered and target amounts ranged from 56% to 121% across devices. Different changes in the measured Hb values were observed on adding increasing amounts of feces to commercial buffers. The amount of collected material is related to the design of the probe. Three out of four manufacturers declare the same target amount while using different sampling volumes and obtaining different amounts of collected material. The introduction of standard probes to reduce preanalytical variability could be a useful step toward fecal test harmonization and fulfillment of the ISO 15189 requirements.
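The VOI-from-geometry calculation in this record is simple solid geometry plus a density assumption. The sketch below uses a plain cylindrical groove and an assumed stool density of about 1 g/mL; the dimensions are invented for illustration and are not those of any specific commercial device:

```python
import math

def cylinder_voi_mm3(diameter_mm, length_mm):
    """Volume of interest of an idealized cylindrical sampling groove."""
    return math.pi * (diameter_mm / 2.0) ** 2 * length_mm

def collected_mass_mg(voi_mm3, density_g_per_ml=1.0):
    # 1 mm^3 = 1e-3 mL and 1 g = 1e3 mg, so mg = mm^3 * density (g/mL)
    return voi_mm3 * density_g_per_ml

voi = cylinder_voi_mm3(1.2, 7.5)         # ~8.48 mm^3, near the 8.22-9.44 range reported
ratio = collected_mass_mg(voi) / 10.0    # recovered-to-target ratio for a 10 mg target
```

Even this idealized probe lands noticeably off a 10 mg target, which is the record's point: probe design, not analytics, drives much of the preanalytical spread.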
Application of the invariant embedding method to analytically solvable transport problems
Energy Technology Data Exchange (ETDEWEB)
Wahlberg, Malin
2005-05-01
The applicability and performance of the invariant embedding method for calculating various transport quantities is investigated in this thesis. The invariant embedding method is a technique to calculate the reflected or transmitted fluxes in homogeneous half-spaces and slabs, without the need for solving for the flux inside the medium. In return, the embedding equations become non-linear, and in practical cases they need to be solved by numerical methods. There are, however, fast and effective iterative methods available for this purpose. The objective of this thesis is to investigate the performance of these iterative methods in model problems for which an exact analytical solution can also be obtained. Some of these analytical solutions are new, hence their derivation constitutes a part of the thesis work. The cases investigated in the thesis all concern the calculation of reflected fluxes from half-spaces. The first problem treated was the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production (i.e. like binary fission), when bombarded by a flux of monoenergetic particles of the same type. Both constant cross sections and energy-dependent cross sections with a power-law dependence were used in the calculations. The second class of problems concerned the calculation of the path length distribution of reflected particles from a medium without multiplication. It is an interesting new observation that the distribution of the path length travelled in the medium before reflection can be calculated with invariant embedding methods, which do not actually solve for the flux distribution in the medium. We have tested the accuracy and the convergence properties of the embedding method also for this case. Finally, very recently a theory connecting the infinite-medium and half-space solutions by embedding-like integral equations was developed and reported in the literature.
International Nuclear Information System (INIS)
Shin, Y.W.; Wiedermann, A.H.
1984-02-01
A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of flow in a piping network can be accurately formulated. This method of formulating the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here to calculate sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy of our scheme suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients.
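The two-step (Richtmyer) Lax-Wendroff scheme named in this record is easy to exercise against an exact solution on a toy problem. The sketch below advects a sine wave once around a periodic domain with linear advection (a deliberately simple stand-in for the record's two-phase network equations) and compares against the exact translated profile:

```python
import math

# Richtmyer two-step Lax-Wendroff for u_t + a u_x = 0 on a periodic domain.
N, a, cfl = 100, 1.0, 0.5
dx = 1.0 / N
dt = cfl * dx / a
steps = round(1.0 / dt)            # advance to t = 1 (one full period)

u = [math.sin(2 * math.pi * j * dx) for j in range(N)]
for _ in range(steps):
    # Step 1: provisional half-step values at cell interfaces j+1/2.
    half = [0.5 * (u[j] + u[(j + 1) % N])
            - 0.5 * a * dt / dx * (u[(j + 1) % N] - u[j]) for j in range(N)]
    # Step 2: full-step update using the interface values.
    u = [u[j] - a * dt / dx * (half[j] - half[j - 1]) for j in range(N)]

exact = [math.sin(2 * math.pi * (j * dx - 1.0)) for j in range(N)]  # u0(x - a*t)
max_err = max(abs(ui - ei) for ui, ei in zip(u, exact))
```

Second-order accuracy keeps the error at the 1e-3 level on this grid; the hybrid method in the record supplies the junction and boundary conditions that a bare Lax-Wendroff interior scheme cannot provide on its own.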
International Nuclear Information System (INIS)
Bachmann, Udo; Biederbick, Walter; Derakshani, Nahid
2010-01-01
The recommendation on sampling for the prevention of hazards in civil defense describes the analysis of chemical, biological and radioactive contamination and includes detailed information on sampling, protocol preparation and documentation procedures. The volume includes a separate brief instruction for CBRN (chemical, biological, radiological, nuclear) sampling.
An analytic solution of the static problem of inclined risers conveying fluid
Alfosail, Feras
2016-05-28
We use the method of matched asymptotic expansion to develop an analytic solution to the static problem of clamped–clamped inclined risers conveying fluid. The inclined riser is modeled as an Euler–Bernoulli beam taking into account its self-weight, mid-plane stretching, an applied axial tension, and the internal fluid velocity. The solution consists of three parts: an outer solution valid away from the two boundaries and two inner solutions valid near the two ends. The three solutions are then matched and combined into a so-called composite expansion. A Newton–Raphson method is used to determine the value of the mid-plane stretching corresponding to each applied tension and internal velocity. The analytic solution is in good agreement with those obtained with other solution methods for large values of applied tensions. Therefore, it can be used to replace other mathematical solution methods that suffer numerical limitations and high computational cost. © 2016 Springer Science+Business Media Dordrecht
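The Newton-Raphson step this record uses to pin down the mid-plane stretching for each tension/velocity pair is, at its core, a scalar root find. A minimal generic sketch (the residual below is a classic illustrative cubic, not the riser's stretching equation):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Basic Newton-Raphson iteration for a scalar equation f(x) = 0,
    as used to determine the mid-plane stretching parameter."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical residual standing in for the stretching balance equation.
f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
root = newton(f, df, 2.0)
```

In the riser problem the residual would be evaluated from the composite matched-asymptotic expansion at the current stretching value, with the derivative obtained analytically or by finite differences.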
An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems
International Nuclear Information System (INIS)
Fazli, Roohallah; Nakhkash, Mansor
2012-01-01
This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. This method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations and in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using the experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
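The first stage described in this record, a MUSIC scan for candidate locations, can be sketched in one dimension for a uniform linear array. The setup below (half-wavelength spacing, two point sources, noise-free covariance with a small diagonal floor) is illustrative only, not the authors' 2D scattering configuration:

```python
import numpy as np

M, K = 8, 2                              # sensors, sources
true_deg = np.array([-20.0, 25.0])

def steering(deg):
    """Steering vectors of an M-element half-wavelength ULA (columns per angle)."""
    ang = np.deg2rad(np.atleast_1d(deg))
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(ang))

A = steering(true_deg)                   # M x K steering matrix
R = A @ A.conj().T + 0.01 * np.eye(M)    # exact covariance with a noise floor

eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvec[:, :M - K]                   # noise subspace (smallest M-K eigenpairs)

def music_spectrum(deg_grid):
    out = []
    for d in deg_grid:
        a = steering(d)[:, 0]
        # Pseudospectrum: large where a(theta) is orthogonal to the noise subspace.
        denom = np.real(a.conj() @ En @ En.conj().T @ a) + 1e-12
        out.append(1.0 / denom)
    return np.array(out)

grid = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(grid)
est = grid[np.argsort(spec)[-2:]]        # the two dominant peaks
```

The record's second stage then applies an analytical spatial filter to decide how many of the candidate peaks correspond to actual scatterers; here the number of sources K is simply assumed known.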
Analytic structure and power series expansion of the Jost function for the two-dimensional problem
International Nuclear Information System (INIS)
Rakityansky, S A; Elander, N
2012-01-01
For a two-dimensional quantum-mechanical problem, we obtain a generalized power series expansion of the S-matrix that can be done near an arbitrary point on the Riemann surface of the energy, similar to the standard effective-range expansion. In order to do this, we consider the Jost function and analytically factorize its momentum dependence that causes the Jost function to be a multi-valued function. The remaining single-valued function of the energy is then expanded in the power series near an arbitrary point in the complex energy plane. A systematic and accurate procedure has been developed for calculating the expansion coefficients. This makes it possible to obtain a semi-analytic expression for the Jost function (and therefore for the S-matrix) near an arbitrary point on the Riemann surface and use it, for example, to locate the spectral points (bound and resonant states) as the S-matrix poles. The method is applied to a model similar to those used in the theory of quantum dots. (paper)
Exact Analytical Solutions in Three-Body Problems and Model of Neutrino Generator
Directory of Open Access Journals (Sweden)
Takibayev N.Zh.
2010-04-01
Exact analytic solutions are obtained in the three-body problem for the scattering of a light particle on a subsystem of two fixed centers in the case when the pair potentials have a separable form. The solutions show the appearance of new resonance states and the dependence of the resonance energy and width on the distance between the two fixed centers. The approach of exact analytical solutions is extended to cases where the two-body scattering amplitudes have the Breit-Wigner form, and is employed to describe neutron resonance scattering on a subsystem of two heavy nuclei fixed at the nodes of a crystalline lattice. It is shown that some resonance states have widths close to zero at certain values of the distance between the two heavy scattering centers, which makes transitions between states possible. One of these transitions between three-body resonance states could be connected with the process of electron capture by a proton, with the formation of a neutron and the emission of a neutrino. This exoenergic process, which leads to the cooling of a star without nuclear reactions, is discussed.
Graphics for the multivariate two-sample problem
International Nuclear Information System (INIS)
Friedman, J.H.; Rafsky, L.C.
1981-01-01
Some graphical methods for comparing multivariate samples are presented. These methods are based on minimal spanning tree techniques developed for multivariate two-sample tests. The utility of these methods is illustrated through examples using both real and artificial data
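The minimal-spanning-tree statistic behind these graphical methods (the Friedman-Rafsky approach) counts MST edges that join points from different samples: under the null of identical distributions many edges cross, while well-separated samples yield very few. A small self-contained sketch with Prim's algorithm and invented 2-D data:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns MST edges."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j))
    return edges

def cross_edge_count(sample_a, sample_b):
    """Friedman-Rafsky style statistic: MST edges joining the two samples."""
    pooled = sample_a + sample_b
    labels = [0] * len(sample_a) + [1] * len(sample_b)
    return sum(1 for i, j in mst_edges(pooled) if labels[i] != labels[j])

# Two well-separated 2-D samples: the pooled MST crosses between them exactly once.
A = [(0.0, 0.0), (1.0, 0.2), (0.3, 0.9), (0.8, 0.7)]
B = [(10.0, 10.0), (10.4, 10.8), (11.0, 10.3), (10.7, 9.6)]
n_cross = cross_edge_count(A, B)
```

For the graphical displays in the paper, the same MST also supplies the plotting skeleton; the cross-edge count is what feeds the two-sample test.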
International Nuclear Information System (INIS)
Parkerton, T.F.; Stone, M.A.
1995-01-01
Hydrocarbons generally elicit toxicity via a nonpolar narcotic mechanism. Recent research suggests that chemicals acting by this mode invoke ecotoxicity when the molar concentration in organism lipid exceeds a critical threshold. Since the ecotoxicity of nonpolar narcotic mixtures appears to be additive, the ecotoxicity of hydrocarbon mixtures thus depends upon: (1) the partitioning of the individual hydrocarbons comprising the mixture from the environment to lipids and (2) the total molar sum of the constituent hydrocarbons in lipids. These insights have led previous investigators to advance the concept of biomimetic extraction as a novel tool for assessing potential narcosis-type or baseline ecotoxicity in aqueous samples. Drawing from this earlier work, the authors have developed a method to quantify Bioavailable Petroleum Hydrocarbons (BPH) in hydrocarbon-contaminated aqueous and soil/sediment samples. A sample is equilibrated with a solid phase microextraction (SPME) fiber that serves as a surrogate for organism lipids. The total moles of hydrocarbons that partition to the SPME fiber are then quantified using a simple GC/FID procedure. Research conducted to support the development and initial validation of this method will be presented. Results suggest that BPH analyses provide a promising, cost-effective approach for predicting the ecotoxicity of environmental samples contaminated with hydrocarbon mixtures. Consequently, BPH analyses may provide a valuable analytical screening tool for ecotoxicity assessment in product and effluent testing, environmental monitoring and site remediation applications.
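The additivity argument in this record reduces to simple arithmetic: each constituent's equilibrium lipid burden is its water concentration times a lipid-water partition coefficient, and baseline toxicity is predicted when the molar sum exceeds a critical lipid burden. All numbers below are invented for illustration; the critical burden and partition coefficients are stand-ins, not measured values:

```python
# Assumed critical body burden for baseline narcosis (mmol per kg lipid).
CRITICAL_LIPID_MMOL_PER_KG = 50.0

# Hypothetical hydrocarbons: water concentration (mmol/L) and lipid-water
# partition coefficient K_lw (L per kg lipid).
mixture = [
    {"name": "HC-1", "c_water": 0.002, "k_lw": 8000.0},
    {"name": "HC-2", "c_water": 0.010, "k_lw": 1500.0},
    {"name": "HC-3", "c_water": 0.050, "k_lw": 300.0},
]

# Molar lipid burden of the whole mixture, assuming equilibrium partitioning
# and concentration additivity for nonpolar narcotics.
lipid_burden = sum(hc["c_water"] * hc["k_lw"] for hc in mixture)  # mmol/kg lipid
toxic_units = lipid_burden / CRITICAL_LIPID_MMOL_PER_KG
exceeds_threshold = toxic_units >= 1.0
```

The SPME fiber in the record plays the role of the lipid phase, so the measured total moles on the fiber stand in directly for the summation above.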
Directory of Open Access Journals (Sweden)
Omar Abu Arqub
2014-01-01
The purpose of this paper is to present a new kind of analytical method, the so-called residual power series, to predict and represent the multiplicity of solutions to nonlinear boundary value problems of fractional order. The present method is capable of calculating all branches of solutions simultaneously, even if these multiple solutions are very close and thus rather difficult to distinguish even by numerical techniques. To verify the computational efficiency of the proposed technique, two nonlinear models are solved, one arising in mixed convection flows and the other in heat transfer, both of which admit multiple solutions. The results reveal that the method is very effective, straightforward, and powerful for formulating these multiple solutions.
A finite volume method for cylindrical heat conduction problems based on local analytical solution
Li, Wang
2012-10-01
A new finite volume method for cylindrical heat conduction problems based on a local analytical solution is proposed in this paper with a detailed derivation. The calculation results of this new method are compared with those of the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though its discretized expression is slightly more complex than that of the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical results show that the total CPU time of the new method is significantly less than that of conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
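For context, the conventional baseline the paper improves on can be sketched as a second-order finite-volume discretization of steady one-dimensional radial conduction, checked against the logarithmic analytical profile. This is a generic textbook scheme under assumed Dirichlet boundary conditions, not the paper's locally-analytic method; all geometry and temperature values below are made up for illustration.

```python
import numpy as np

def solve_radial_fv(r1, r2, T1, T2, n):
    """Second-order finite-volume solution of steady 1-D radial
    conduction d/dr(r dT/dr) = 0 on n control volumes, with fixed
    temperatures T1 at r1 and T2 at r2."""
    edges = np.linspace(r1, r2, n + 1)      # control-volume face radii
    rc = 0.5 * (edges[:-1] + edges[1:])     # cell-centre radii
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        rw, re = edges[i], edges[i + 1]
        # face conductances: face radius / centre-to-centre distance
        gw = rw / (rc[i] - rc[i - 1]) if i > 0 else rw / (rc[i] - r1)
        ge = re / (rc[i + 1] - rc[i]) if i < n - 1 else re / (r2 - rc[i])
        A[i, i] = gw + ge
        if i > 0:
            A[i, i - 1] = -gw
        else:
            b[i] += gw * T1           # west boundary condition
        if i < n - 1:
            A[i, i + 1] = -ge
        else:
            b[i] += ge * T2           # east boundary condition
    return rc, np.linalg.solve(A, b)

# illustrative pipe wall: inner radius 1 cm at 400 K, outer 10 cm at 300 K
r1, r2, T1, T2 = 0.01, 0.1, 400.0, 300.0
rc, T = solve_radial_fv(r1, r2, T1, T2, 40)
exact = T1 + (T2 - T1) * np.log(rc / r1) / np.log(r2 / r1)
print(float(np.max(np.abs(T - exact))))   # small discretization error
```

The exact solution of the radial Laplace equation is logarithmic in r, so the maximum deviation of the discrete solution quantifies exactly the grid-dependent error that a locally-analytic flux expression is designed to reduce.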
Analytic simulation of the Poincare surface of sections for the diamagnetic Kepler problem
International Nuclear Information System (INIS)
Hasegawa, H.; Harada, A.; Okazaki, Y.
1984-01-01
The Poincare surface-of-section analysis which the authors previously reported for the diamagnetic Kepler problem (a classical hydrogen atom in a uniform magnetic field) in a transition region from regular to chaotic motion is simulated by analytic means: intersections of the energy integral with the approximate integral Λ of Solovev are taken to obtain sections of the two separate regions of the motion that exist in the limit of a weak magnetic field (B → 0). The origin of the unique hyperbolic point and the separatrix around which the onset of chaos takes place are thus identified. The invariant tori arising near full chaos are shown to be simulated by this method, but with modified parameter values in the expression Λ. (author)
Three-dimensional transport theory: An analytical solution of an internal beam searchlight problem-I
International Nuclear Information System (INIS)
Williams, M.M.R.
2009-01-01
We describe a number of methods for obtaining analytical solutions and numerical results for three-dimensional one-speed neutron transport problems in a half-space containing a variety of source shapes which emit neutrons mono-directionally. For example, we consider an off-centre point source, a ring source and a disk source, or any combination of these, and calculate the surface scalar flux as a function of the radial and angular co-ordinates. Fourier transforms are used in the transverse directions and a Laplace transform in the axial direction. This enables the Wiener-Hopf method to be employed, followed by an inverse Fourier-Hankel transform. Some additional transformations are introduced which enable the inverse Hankel transforms involving Bessel functions to be evaluated numerically more efficiently. A hybrid diffusion theory method is also described which is shown to be a useful guide to the general behaviour of the solutions of the transport equation.
Analytic solutions to a family of boundary-value problems for Ginzburg-Landau type equations
Vassilev, V. M.; Dantchev, D. M.; Djondjorov, P. A.
2017-10-01
We consider a two-parameter family of nonlinear ordinary differential equations describing the behavior of a critical thermodynamic system, e.g., a binary liquid mixture, of film geometry within the framework of the Ginzburg-Landau theory by means of the order-parameter. We focus on the case in which the confining surfaces are strongly adsorbing but prefer different components of the mixture, i.e., the order-parameter tends to infinity at one of the boundaries and to minus infinity at the other one. We assume that the boundaries of the system are positioned at a finite distance from each other and give analytic solutions to the corresponding boundary-value problems in terms of Weierstrass and Jacobi elliptic functions.
Analytic simulation of the Poincare surface of sections for the diamagnetic Kepler problem
Energy Technology Data Exchange (ETDEWEB)
Hasegawa, H.; Harada, A.; Okazaki, Y. [Kyoto Univ. (Japan). Dept. of Physics]
1984-11-11
The Poincare surface-of-section analysis which the authors previously reported for the diamagnetic Kepler problem (a classical hydrogen atom in a uniform magnetic field) in a transition region from regular to chaotic motion is simulated by analytic means: intersections of the energy integral with the approximate integral Λ of Solovev are taken to obtain sections of the two separate regions of the motion that exist in the limit of a weak magnetic field (B → 0). The origin of the unique hyperbolic point and the separatrix around which the onset of chaos takes place are thus identified. The invariant tori arising near full chaos are shown to be simulated by this method, but with modified parameter values in the expression Λ.
A finite volume method for cylindrical heat conduction problems based on local analytical solution
Li, Wang; Yu, Bo; Wang, Xinran; Wang, Peng; Sun, Shuyu
2012-01-01
A new finite volume method for cylindrical heat conduction problems based on a local analytical solution is proposed in this paper with a detailed derivation. The calculation results of this new method are compared with those of the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, even though its discretized expression is slightly more complex than that of the second-order central finite volume method, making it cost more calculation time on the same grids. Numerical results show that the total CPU time of the new method is significantly less than that of conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
Analytic Bayesian solution of the two-stage poisson-type problem in probabilistic risk analysis
International Nuclear Information System (INIS)
Frohner, F.H.
1985-01-01
The basic purpose of probabilistic risk analysis is to make inferences about the probabilities of various postulated events, taking account of all relevant information such as prior knowledge and operating experience with the specific system under study, as well as experience with other similar systems. Estimation of the failure rate of a Poisson-type system leads to an especially simple Bayesian solution in closed form if the prior probability implied by the invariance properties of the problem is properly taken into account. This basic simplicity persists if a more realistic prior, representing order-of-magnitude knowledge of the rate parameter, is employed instead. Moreover, the more realistic prior allows direct incorporation of experience gained from other similar systems, without the need to postulate a statistical model for an underlying ensemble. The analytic formalism is applied to actual nuclear reactor data.
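The closed-form Bayesian update for a Poisson failure rate can be written down directly: a gamma prior is conjugate to Poisson counting data, so the posterior is again a gamma distribution. The sketch below uses the Jeffreys-type choice Gamma(1/2, 0) as the invariance-motivated prior; the numerical failure counts and exposure time are invented for illustration and are not the reactor data analyzed in the paper.

```python
def posterior_gamma_poisson(k, t, a0=0.5, b0=0.0):
    """Closed-form Bayesian update for a Poisson-type failure rate.

    Prior: rate ~ Gamma(a0, b0) (a0=0.5, b0=0 is the Jeffreys-type,
    invariance-motivated choice). Observing k failures over exposure
    time t gives the posterior Gamma(a0 + k, b0 + t).
    Returns the posterior mean and variance of the failure rate.
    """
    a, b = a0 + k, b0 + t
    return a / b, a / b**2

# hypothetical record: 3 failures observed in 1000 hours of operation
mean, var = posterior_gamma_poisson(k=3, t=1000.0)
print(mean, var)   # posterior mean 3.5/1000 per hour
```

Experience from other similar systems enters the same formula naturally: their pooled counts and exposure simply update `a0` and `b0` before the system-specific data are applied, with no ensemble model required.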
Some analytical aspects about determination of Sr89 and Sr90 in environmental samples
International Nuclear Information System (INIS)
Gasco, C.; Alvarez Garcia, A.
1988-01-01
Some problems in the determination of Sr-89 and Sr-90 in environmental samples have been studied. The main difficulties are due to the wide range of concentrations of the components and to the content of chemically and radiochemically interfering elements. The behaviour of strontium on ion-exchange resin has been characterized by experiments in which the aqueous medium, the calcium concentration and the matrix were varied. The differences in the solubilities of the alkaline-earth nitrates and carbonates in nitric acid have been analyzed. The chemical recovery in environmental samples has been determined. (Author)
Micro-Crater Laser Induced Breakdown Spectroscopy - an analytical approach in metal samples
Energy Technology Data Exchange (ETDEWEB)
Piscitelli, Vincent [UCV- Laboratorio de Espectroscopia Laser, Caracas (Venezuela); Lawrence Berkeley National laboratory, Berkeley, US (United States); Gonzalez, Jhanis; Xianglei, Mao; Russo, Richard [Lawrence Berkeley National laboratory, Berkeley, US (United States); Fernandez, Alberto [UCV- Laboratorio de Espectroscopia Laser, Caracas (Venezuela)
2008-04-15
Laser ablation has been growing in popularity as a technique for chemical analysis, owing to its great potential for the analysis of solid samples. To contribute to the development of the technique, in this work we studied laser induced breakdown spectroscopy (LIBS) under micro-ablation conditions, with a view to future studies of coatings and micro-crater analysis. Craters between 2 and 7 micrometers in diameter were made using an Nd:YAG nanosecond laser at its fundamental emission of 1064 nm. To create these craters we used a long-working-distance objective lens with a numerical aperture of 0.45. The atomic emission as a function of laser energy, and its effect on crater size, was studied. We found that below 3 micrometers, although there was evidence of material removal through the formation of a crater, no atomic emission was detectable with our instruments. To try to understand this, curves of crater size versus plasma temperature were constructed from Boltzmann plots of the copper emission lines in the visible region. In addition, calibration curves for copper and aluminum were made in two different matrices, a Cu/Zn alloy and a zinc matrix, using the atomic lines Cu I (521.78 nm) and Al I (396.15 nm). From the calibration curves the limit of detection and other analytical parameters were obtained.
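The Boltzmann-plot temperature estimate used above follows from fitting ln(Iλ/(gA)) against the upper-level energy, whose slope is -1/(k_B T). The sketch below demonstrates the fit on synthetic emission lines; the wavelengths, statistical weights and transition probabilities are placeholder values, not the Cu I line data from the paper.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(wavelength_nm, intensity, g, A, E_upper_eV):
    """Estimate the plasma excitation temperature from emission lines.

    Fits the Boltzmann plot ln(I*lambda/(g*A)) = -E_upper/(k_B*T) + C
    by linear regression; the slope gives T = -1/(k_B * slope).
    """
    y = np.log(np.asarray(intensity) * np.asarray(wavelength_nm)
               / (np.asarray(g) * np.asarray(A)))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# synthetic lines generated at T = 10000 K to check the recovery
T_true = 10000.0
E = np.array([3.8, 5.1, 6.2, 7.7])             # upper-level energies, eV
g = np.array([4, 6, 2, 8])                      # statistical weights
A = np.array([2e7, 6e7, 1e7, 4e7])              # transition probabilities, 1/s
lam = np.array([521.8, 515.3, 510.5, 465.1])    # wavelengths, nm
I = g * A / lam * np.exp(-E / (K_B_EV * T_true))
print(boltzmann_plot_temperature(lam, I, g, A, E))
```

Because the synthetic intensities are generated from the same Boltzmann relation, the regression recovers the input temperature; with real LIBS data the scatter of the points about the fitted line indicates how well the plasma satisfies the local-thermodynamic-equilibrium assumption.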
Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.
2012-12-01
One of the best-known problems in the field of microeconomics is the firm's cost-minimization problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints on the inputs. Moreover, we prove that it belongs to the class C1.
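The object being characterized can be sketched numerically: minimize input expenditure w·x subject to a Cobb-Douglas output constraint, with optional caps on the inputs. This is a generic numerical formulation (the paper derives the cost function in closed form, which is not reproduced here); the prices, exponents and output level are illustrative. When no cap binds, the result matches the classic unconstrained Cobb-Douglas cost formula.

```python
import numpy as np
from scipy.optimize import minimize

def cobb_douglas_cost(q, w, alpha, x_max=None):
    """Minimum cost of producing output q = prod(x_i ** alpha_i)
    at input prices w, with optional per-input caps x_max.

    A numerical sketch of the firm's cost-minimization problem.
    """
    w = np.asarray(w, float)
    alpha = np.asarray(alpha, float)
    n = len(w)
    cons = {"type": "eq", "fun": lambda x: np.prod(x ** alpha) - q}
    bounds = [(1e-9, None if x_max is None else x_max[i]) for i in range(n)]
    res = minimize(lambda x: w @ x, np.ones(n),
                   bounds=bounds, constraints=[cons])
    return res.fun

# two inputs, q = x1^0.5 * x2^0.5, prices w = (2, 3), target output q = 4
w, alpha, q = [2.0, 3.0], [0.5, 0.5], 4.0
a, b = alpha
# classic closed form for the unconstrained two-input case
closed = (q ** (1 / (a + b)) * (a + b)
          * (w[0] / a) ** (a / (a + b)) * (w[1] / b) ** (b / (a + b)))
print(cobb_douglas_cost(q, w, alpha), closed)
```

With a binding cap on one input, the optimizer is forced off the unconstrained expansion path and the cost rises above `closed`, which is precisely the piecewise behaviour whose smoothness (C1) the paper establishes.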
Sample problem manual for benchmarking of cask analysis codes
International Nuclear Information System (INIS)
Glass, R.E.
1988-02-01
A series of problems has been defined to evaluate structural and thermal codes. These problems were designed to simulate the hypothetical accident conditions given in Title 10 of the Code of Federal Regulations, Part 71 (10CFR71), while retaining simple geometries. This produced a problem set that exercises the ability of the codes to model the pertinent physical phenomena without requiring extensive use of computer resources. The solutions that are presented are consensus solutions based on computer analyses done by both national laboratories and industry in the United States, United Kingdom, France, Italy, Sweden, and Japan. The intent of this manual is to provide code users with a set of standard structural and thermal problems and solutions which can be used to evaluate individual codes. 19 refs., 19 figs., 14 tabs
Dry sample storage system for an analytical laboratory supporting plutonium processing
International Nuclear Information System (INIS)
Treibs, H.A.; Hartenstein, S.D.; Griebenow, B.L.; Wade, M.A.
1990-01-01
The Special Isotope Separation (SIS) plant is designed to provide removal of undesirable isotopes in fuel grade plutonium by the atomic vapor laser isotope separation (AVLIS) process. The AVLIS process involves evaporation of plutonium metal, and passage of an intense beam of light from a laser through the plutonium vapor. The laser beam consists of several discrete wavelengths, tuned to the precise wavelength required to ionize the undesired isotopes. These ions are attracted to charged plates, leaving the bulk of the plutonium vapor enriched in the desired isotopes to be collected on a cold plate. Major portions of the process consist of pyrochemical processes, including direct reduction of the plutonium oxide feed material with calcium metal, and aqueous processes for purification of plutonium in residues. The analytical laboratory for the plant is called the Material and Process Control Laboratory (MPCL), and provides for the analysis of solid and liquid process samples
Energy Technology Data Exchange (ETDEWEB)
NONE
1992-12-01
The IAEA supports a number of projects having to do with the analysis of airborne particulate matter by nuclear techniques. Most of this work involves the use of activation analysis in its various forms, particularly instrumental neutron activation analysis (INAA). This technique has been widely used in many different countries for the analysis of airborne particulate matter, and there are already many publications in scientific journals, books and reports describing such work. The present document represents an attempt to summarize the most important features of INAA as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of INAA to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability, although they are presented here in a way that takes account of the particular requirements arising from the use of INAA as the analytical technique. The analytical part of the document, however, is presented in a form that is applicable only to INAA. (Subsequent publications in this series are expected to deal specifically with other nuclear related techniques such as energy dispersive X ray fluorescence (ED-XRF) and particle induced X ray emission (PIXE) analysis). Although the methods and procedures described here have been found through experience to yield acceptable results, they should not be considered mandatory. Any other procedure used should, however, be chosen to be capable of yielding results at least of equal quality to those described.
International Nuclear Information System (INIS)
1992-01-01
The IAEA supports a number of projects having to do with the analysis of airborne particulate matter by nuclear techniques. Most of this work involves the use of activation analysis in its various forms, particularly instrumental neutron activation analysis (INAA). This technique has been widely used in many different countries for the analysis of airborne particulate matter, and there are already many publications in scientific journals, books and reports describing such work. The present document represents an attempt to summarize the most important features of INAA as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of INAA to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability, although they are presented here in a way that takes account of the particular requirements arising from the use of INAA as the analytical technique. The analytical part of the document, however, is presented in a form that is applicable only to INAA. (Subsequent publications in this series are expected to deal specifically with other nuclear related techniques such as energy dispersive X ray fluorescence (ED-XRF) and particle induced X ray emission (PIXE) analysis). Although the methods and procedures described here have been found through experience to yield acceptable results, they should not be considered mandatory. Any other procedure used should, however, be chosen to be capable of yielding results at least of equal quality to those described
International Nuclear Information System (INIS)
1993-01-01
The present document represents an attempt to summarize the most important features of the different forms of ED-XRF as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of ED-XRF to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability. Emphasis is also placed on the sources of errors affecting the sampling of airborne particulate matter. The analytical part of the document describes the different forms of ED-XRF and their potential applications. Spectrum evaluation, a key step in X-ray spectrometry, is covered in depth, including discussion on several calibration and peak fitting techniques and computer programs especially designed for this purpose. 148 refs, 25 figs, 13 tabs
Valent, Tullio
1988-01-01
In this book I present, in a systematic form, some local theorems on existence, uniqueness, and analytic dependence on the load, which I have recently obtained for some types of boundary value problems of finite elasticity. Actually, these results concern an n-dimensional (n ≥ 1) formal generalization of three-dimensional elasticity. Such a generalization, besides being quite spontaneous, allows us to consider a great many interesting mathematical situations, and sometimes allows us to clarify certain aspects of the three-dimensional case. Part of the matter presented is unpublished; other arguments have been only partially published and in lesser generality. Note that I concentrate on simultaneous local existence and uniqueness; thus, I do not deal with the more general theory of existence. Moreover, I restrict my discussion to compressible elastic bodies and I do not treat unilateral problems. The clever use of the inverse function theorem in finite elasticity made by STOPPELLI [1954, 1957a, 1957b]...
Benchmarking the invariant embedding method against analytical solutions in model transport problems
International Nuclear Information System (INIS)
Wahlberg, Malin; Pazsit, Imre
2005-01-01
The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)
Analytical solution to the circularity problem in the discounted cash flow valuation framework
Directory of Open Access Journals (Sweden)
Felipe Mejía-Peláez
2011-12-01
In this paper we propose an analytical solution to the circularity problem between value and the cost of capital. Our solution is derived starting from a central principle of finance that relates value today to value, cash flow, and the discount rate for the next period. We present a general formulation without circularity for the equity value (E), cost of levered equity (Ke), levered firm value (V), and the weighted average cost of capital (WACC). We furthermore compare the results obtained from these formulas with the results of the Adjusted Present Value approach (no circularity) and with the iterative solution of the circularity based upon the iteration feature of a spreadsheet, concluding that all methods yield exactly the same answer. The advantage of this solution is that it avoids problems such as manual methods (i.e., the popular "Rolling WACC") that ignore the circularity issue, setting a (usually constant) target leverage with the inconsistencies that result from it, the wrong use of book values, or attributing the discrepancies in values to rounding errors.
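The circularity in question is that the WACC depends on the firm value V while V is obtained by discounting at the WACC. For a perpetuity it can be resolved by the "spreadsheet iteration" the paper replaces with a closed form, and the answer checked against the no-circularity APV result. The sketch below assumes the tax shield is discounted at the unlevered cost of equity Ku (so Ke = Ku + (Ku - Kd)·D/E); all cash-flow and rate values are invented for illustration, not taken from the paper.

```python
def firm_value_iterative(fcf, ku, kd, debt, tax, tol=1e-12):
    """Resolve the value <-> WACC circularity for a perpetual free
    cash flow by fixed-point iteration.

    Convention: tax shield discounted at Ku, hence
    Ke = Ku + (Ku - Kd) * D/E and WACC = Ke*E/V + Kd*(1-tax)*D/V.
    """
    v = fcf / ku                         # initial guess: unlevered value
    while True:
        e = v - debt                     # equity value at current guess
        ke = ku + (ku - kd) * debt / e   # levered cost of equity
        wacc = ke * e / v + kd * (1 - tax) * debt / v
        v_new = fcf / wacc               # revalue the perpetuity
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

# hypothetical firm: FCF 100/yr, Ku 12%, Kd 6%, debt 300, tax 30%
fcf, ku, kd, debt, tax = 100.0, 0.12, 0.06, 300.0, 0.30
v_iter = firm_value_iterative(fcf, ku, kd, debt, tax)
v_apv = fcf / ku + tax * kd * debt / ku   # APV: no circularity involved
print(v_iter, v_apv)
```

The iteration converges to the same number the APV formula gives in one step, which is the consistency the paper's analytical formulation guarantees in general.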
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity.
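The core APM idea, that Gaussian uncertainties convolved with Gaussian pencil beams yield dose moments in closed form, can be shown in one dimension. This is a toy model, not the paper's full IMPT formalism: for a Gaussian dose profile under a normally distributed setup shift, the expected dose is again Gaussian with widened variance, and the closed form can be checked against brute-force random sampling. All beam and uncertainty parameters below are illustrative.

```python
import numpy as np

def expected_dose_apm(x, mu, sigma, setup_sd):
    """Closed-form expectation of a 1-D Gaussian pencil-beam dose
    d(x) = exp(-(x - mu - s)^2 / (2 sigma^2)) under a random setup
    shift s ~ N(0, setup_sd^2).

    The Gaussian convolution identity gives a Gaussian with variance
    sigma^2 + setup_sd^2, scaled by sigma/sqrt(sigma^2 + setup_sd^2).
    """
    var = sigma**2 + setup_sd**2
    return sigma / np.sqrt(var) * np.exp(-(x - mu)**2 / (2 * var))

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 61)
mu, sigma, setup_sd = 0.0, 0.5, 0.3

# Monte Carlo reference: sample setup shifts and average the dose
shifts = rng.normal(0.0, setup_sd, size=50000)
mc = np.mean(np.exp(-(x[:, None] - mu - shifts)**2 / (2 * sigma**2)), axis=1)

print(float(np.max(np.abs(mc - expected_dose_apm(x, mu, sigma, setup_sd)))))
```

The closed form matches the 50000-sample Monte Carlo average to within sampling noise at zero sampling cost, which is the speed/accuracy argument the abstract makes at clinical scale.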
International Nuclear Information System (INIS)
Douglas, J.G.; Meznarich, H.K.; Olsen, J.R.; Ross, G.A.; Stauffer, M.
2009-01-01
remove inorganic chloride from the activated-carbon adsorption tubes. With the TOX sample preparation equipment and TOX analyzers at WSCF, the nitrate wash recommended by EPA SW-846 method 9020B was found to be inadequate to remove inorganic chloride interference. Increasing the nitrate wash concentration from 10 grams per liter (g/L) to 100 g/L potassium nitrate and increasing the nitrate wash volume from 3 milliliters (mL) to 10 mL effectively removed the inorganic chloride up to at least 100 ppm chloride in the sample matrix. Excessive purging of the adsorption tubes during sample preparation was eliminated. These changes in sample preparation have been incorporated in the analytical procedure. The results using the revised sample preparation procedure show better agreement of TOX values both for replicate analyses of single samples and for the analysis of replicate samples acquired from the same groundwater well. Furthermore, less apparent adsorption tube breakthrough now occurs with the revised procedure. One additional modification made to sample preparation was to discontinue the treatment of groundwater samples with sodium bisulfite. Sodium bisulfite is used to remove inorganic chlorine from the sample; inorganic chlorine is not expected to be a constituent in these groundwater samples. Several other factors were also investigated as possible sources of anomalous TOX results: (1) Instrument instability: examination of the history of results for TOX laboratory control samples and initial calibration verification standards indicate good long-term precision for the method and instrument. Determination of a method detection limit of 2.3 ppb in a deionized water matrix indicates the method and instrumentation have good stability and repeatability. (2) Non-linear instrument response: the instrument is shown to have good linear response from zero to 200 parts per billion (ppb) TOX.
This concentration range encompasses the majority of samples received at WSCF for TOX
Analytical calculations of neutron slowing down and transport in the constant-cross-section problem
International Nuclear Information System (INIS)
Cacuci, D.G.
1978-01-01
Some aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. In deriving these formulas, use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space.'' The calculations were greatly aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation, via FORMAC, of complicated sequences of manipulations. For the case of no absorption it was possible to obtain for materials of any atomic weight explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^(-5). The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, is one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the ''age'' for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table
Analytical calculations of neutron slowing down and transport in the constant-cross-section problem
International Nuclear Information System (INIS)
Cacuci, D.G.
1978-04-01
Aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann Equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. Use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space.'' The calculations were aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation of complicated sequences of manipulations. For the case of no absorption it was possible to obtain for materials of any atomic weight explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^(-5). The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, represents one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the ''age'' for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table
Problems of Aero-optics and Adaptive Optical Systems: Analytical Review
Directory of Open Access Journals (Sweden)
Yu. I. Shanin
2017-01-01
The analytical review gives the basic concepts of the aero-optics problem arising from radiation propagation in the region of the boundary layers of a laser installation carrier aircraft. It estimates the radiation wave-front distortions during propagation in the near and far field, presents the main calculation approaches and methods used to solve the gas-dynamic and optical problems of laser radiation propagation, and conducts a detailed analysis of the flows, and of the optical aberrations they generate, introduced by the aircraft turret (the projection platform of the on-board laser). It considers the effect of various factors (shock wave, difference in wall and flow temperatures) on the flow pattern and the optical aberrations, and provides research data on aero-optics obtained in a flying laboratory directly in flight. The experimental research methods, diagnostic equipment, and synthesis of results in studying the aero-optics problem are briefly considered, as are some methods for mitigating aerodynamic effects on light propagation under flight conditions, with data on passive, active, and hybrid means of acting on the flow in the boundary layers in order to reduce aberrations by improving the flow aerodynamics. The paper also considers the operation of adaptive optical systems under conditions of aero-optical distortions. It presents study results concerning the reduction of the aero-optics effect on the characteristics of radiation in the far field, gives research results on the effect of laser-beam jitter and of a time delay in the feedback-signal transmission, which occur under application conditions, on the efficiency of the adaptive system, and provides data on adaptive correction of aero-optically distorted wave fronts. Finally, it considers some aspects of applying adaptive filtration in control systems of on-board adaptive optics as a way to improve the efficiency of adaptive optical systems, with a view to making use of the results obtained.
Analytical calculations of neutron slowing down and transport in the constant-cross-section problem
Energy Technology Data Exchange (ETDEWEB)
Cacuci, D.G.
1978-04-01
Aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. Use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the "eigenvalue space." The calculations were aided by the construction of a closed general expression for these "eigenvalue space" moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation of complicated sequences of manipulations. For the case of no absorption it was possible to obtain, for materials of any atomic weight, explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^-5. The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, represents one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the "age" for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table.
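As a rough illustration of the age-theory baseline that the corrections above refine, a minimal Python sketch of the classical formulas (mean lethargy gain ξ, transport-corrected diffusion coefficient, and the age-theory moment M_2(u) = 6τ(u)) might look as follows; the function names and the assumption of a purely scattering medium with constant cross sections are illustrative, not taken from the paper:

```python
import math

def xi(A):
    """Mean lethargy gain per elastic collision for nuclear mass A."""
    if A == 1:
        return 1.0  # hydrogen: xi = 1 exactly
    alpha = ((A - 1.0) / (A + 1.0)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

def age_theory_M2(u, A, sigma_s):
    """Age-theory second spatial moment M_2(u) = 6*tau(u) for a purely
    scattering medium with constant macroscopic cross section sigma_s (1/cm)."""
    mu_bar = 2.0 / (3.0 * A)              # average scattering cosine
    sigma_tr = sigma_s * (1.0 - mu_bar)   # transport cross section
    D = 1.0 / (3.0 * sigma_tr)            # diffusion coefficient
    tau = D * u / (xi(A) * sigma_s)       # Fermi age at lethargy u
    return 6.0 * tau
```

For carbon (A = 12), ξ ≈ 0.158, the familiar textbook value.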
Communication Problems in Turner Syndrome: A Sample Survey.
Van Borsel, John; Dhooge, Inge; Verhoye, Kristof; Derde, Kristel; Curfs, Leopold
1999-01-01
A survey of 128 females (ages 2-58) with Turner syndrome found almost one quarter were receiving or had received treatment for stuttering, articulation problems, and/or delayed language development, with the latter two disorders being checked most frequently. Only 4 of the 68 individuals receiving growth hormone treatment reported voice changes.…
Hall, M B; Mertens, D R
2012-04-01
In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement of fiber fermentability by rumen microbes. Variation is inherent in all assays and may be increased as multiple steps or differing procedures are used to assess an empirical measure. The main objective of this study was to evaluate variability within and among laboratories of 30-h NDFD values analyzed in repeated runs. Subsamples of alfalfa (n=4), corn forage (n=5), and grass (n=5) ground to pass a 6-mm screen passed a test for homogeneity. The 14 samples were sent to 10 laboratories on 3 occasions over 12 mo. Laboratories ground the samples and ran 1 to 3 replicates of each sample within fermentation run and analyzed 2 or 3 sets of samples. Laboratories used 1 of 2 NDFD procedures: 8 labs used procedures related to the 1970 Goering and Van Soest (GVS) procedure using fermentation vessels or filter bags, and 2 used a procedure with preincubated inoculum (PInc). Means and standard deviations (SD) of sample replicates within run within laboratory (lab) were evaluated with a statistical model that included lab, run within lab, sample, and lab × sample interaction as factors. All factors affected mean values for 30-h NDFD. The lab × sample effect suggests against a simple lab bias in mean values. The SD ranged from 0.49 to 3.37% NDFD and were influenced by lab and run within lab. The GVS procedure gave greater NDFD values than PInc, with an average difference across all samples of 17% NDFD. Because of the differences between GVS and PInc, we recommend using results in contexts appropriate to each procedure. The 95% probability limits for within-lab repeatability and among-lab reproducibility for GVS mean values were 10.2 and 13.4%, respectively. These percentages describe the span of the range around the mean into which 95% of analytical results for a sample fall for values generated within a lab and among labs. This degree of precision was supported in that the average maximum
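The 95% repeatability and reproducibility limits quoted above follow the usual ISO 5725-style convention, in which a limit is approximately 2.77 (≈ 1.96·√2) times the relevant standard deviation; a minimal sketch, with hypothetical SD inputs rather than the study's actual variance components:

```python
import math

def precision_limits(s_r, s_L):
    """ISO 5725-style 95% limits: repeatability r from the within-lab SD s_r,
    reproducibility R from s_R = sqrt(s_L^2 + s_r^2), where s_L is the
    among-lab SD. The factor 2.77 ~ 1.96*sqrt(2) covers the absolute
    difference between two single results."""
    s_R = math.sqrt(s_L ** 2 + s_r ** 2)
    r = 2.77 * s_r
    R = 2.77 * s_R
    return r, R
```

With s_r = 2.0 and s_L = 1.5 (units of % NDFD, hypothetical), this gives r = 5.54 and R = 6.925.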
Pain beliefs and problems in functioning among people with arthritis: a meta-analytic review.
Jia, Xiaojun; Jackson, Todd
2016-10-01
In this meta-analysis, we evaluated overall strengths of relation between beliefs about pain, health, or illness and problems in functioning (i.e., functional impairment, affective distress, pain severity) in osteoarthritis and rheumatoid arthritis samples as well as moderators of these associations. In sum, 111 samples (N = 17,365 patients) met inclusion criteria. On average, highly significant, medium effect sizes were observed for associations between beliefs and problems in functioning, but heterogeneity was also inflated. Effect sizes were not affected by arthritis subtype, gender, or age. However, pain belief content emerged as a significant moderator, with larger effect sizes for studies in which personal incapacity or ineffectiveness in controlling pain was a content theme of belief indices (i.e., pain catastrophizing, helplessness, self-efficacy) compared to those examining locus of control and fear/threat/harm beliefs. Furthermore, analyses of longitudinal study subsets supported the status of pain beliefs as risk factors for later problems in functioning in these groups.
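A pooled effect size of the kind aggregated in such a meta-analysis is commonly computed with the DerSimonian-Laird random-effects estimator; the abstract does not specify the exact pooling model used, so the following is an illustrative sketch only:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size via the DerSimonian-Laird estimator.
    effects: per-study effect sizes; variances: their within-study variances."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q quantifies heterogeneity around the fixed-effect mean
    Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - df) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

When all studies report the same effect, the estimator returns that effect with zero between-study variance, as expected.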
International Nuclear Information System (INIS)
May, S.; Piccot, D.
1984-01-01
In reactor dismantling, the residual radioactivity of the concrete used, especially in the biological shield, can cause problems for treatment and disposal. The radioactivity of reactor concrete can be forecast if its elemental content is known. The elements producing long-lived radionuclides are chlorine, calcium, nickel, cobalt, niobium, europium and samarium. Neutron activation analysis is used for the determination of these elements, without chemical separation for Ca, Co, Eu and Sm and with radiochemical separation for Cl, Ni and Nb. Many other elements of lesser interest are also determined by gamma spectrometry after irradiation. It was possible to determine 29 elements in 21 concrete samples from different European Community reactors
International Nuclear Information System (INIS)
Garcia Alonso, S.; Perez Pastor, R. M.
2013-01-01
A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)
Directory of Open Access Journals (Sweden)
M. I. Popov
2016-01-01
Full Text Available An approximate analytical solution is presented for the problem of nonstationary free convection of a Newtonian liquid, in the conductive and laminar regime, in a square cavity subject to an instantaneous change of temperature of one sidewall, with no heat flux through the top and bottom boundaries. The free-convection equations in the Oberbeck-Boussinesq approximation are linearized by neglecting the convective terms. To reduce the number of hydrothermal parameters, the system is made dimensionless by introducing scales for the dependent and independent variables. Transition from the classical variables to the vorticity-stream function variables reduces the system to a nonstationary heat conduction equation and a nonstationary nonhomogeneous biharmonic equation, the first being independent of the second. The solution for the stream function is obtained by applying the finite sine Fourier transform to the biharmonic equation, first in the variable x and then in the variable y. The stream function takes the form of a double Fourier sine series with coefficients in integral form; the coefficients of the series are integrals of unknown functions. On the basis of a hypothesis about the explicit form of these integrals, the coefficients are calculated from the linear equation system obtained from the boundary conditions on the partial derivatives of the function. The dependence of the flow structure on the Prandtl number is investigated. Maps of streamlines and isolines of the velocity components are obtained that describe the development of the flow from its onset to the transition to a steady state, and plots of the velocity vector field at various times illustrate the flow dynamics. The reliability of the hypothesis about the explicit form of the integral coefficients is confirmed by its physical plausibility and by the agreement of the results with a numerical solution of the problem.
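The diagonalizing effect of a double sine transform on the biharmonic operator can be sketched in a simplified setting with Navier (simply supported) boundary conditions on the unit square, where each Fourier mode is simply divided by ((mπ)² + (nπ)²)²; this illustrates the transform technique only and is not the paper's actual boundary-value problem, whose boundary conditions differ:

```python
import math

def biharmonic_sine_solution(f_mn, x, y):
    """Evaluate psi(x, y) solving nabla^4 psi = f on the unit square with
    psi = laplacian(psi) = 0 on the boundary, given the double Fourier sine
    coefficients f_mn of the right-hand side (dict keyed by (m, n)):
        psi = sum f_mn / ((m*pi)^2 + (n*pi)^2)^2 * sin(m*pi*x) * sin(n*pi*y)
    """
    psi = 0.0
    for (m, n), c in f_mn.items():
        lam = (m * math.pi) ** 2 + (n * math.pi) ** 2
        psi += c / lam ** 2 * math.sin(m * math.pi * x) * math.sin(n * math.pi * y)
    return psi
```

For the single mode f = sin(πx)·sin(πy), the exact solution at the cavity center is 1/(4π⁴).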
Optimization of an analytical electron microscope for x-ray microanalysis: instrumental problems
International Nuclear Information System (INIS)
Bentley, J.; Zaluzec, N.J.; Kenik, E.A.; Carpenter, R.W.
1979-01-01
The addition of an energy dispersive x-ray spectrometer to a modern transmission or scanning transmission electron microscope can provide a powerful tool in the characterization of materials. Unfortunately, this seemingly simple modification can lead to a host of instrumental problems with respect to the accuracy, validity, and quality of the recorded information. This tutorial reviews the complications which can arise in performing x-ray microanalysis in current analytical electron microscopes. The first topic treated in depth is fluorescence by uncollimated radiation. The source, distinguishing characteristics, effects on quantitative analysis, and schemes for elimination or minimization as applicable to TEM/STEMs, D-STEMs and HVEMs are discussed. The local specimen environment is considered in the second major section, where again detrimental effects on quantitative analysis and remedial procedures, particularly the use of low-background specimen holders, are highlighted. Finally, the detrimental aspects of specimen contamination, insofar as they affect x-ray microanalysis, are discussed. It is concluded that if the described preventive measures are implemented, reliable quantitative analysis is possible
Problem of cadmium, arsenic and zinc determination in environmental samples
International Nuclear Information System (INIS)
Malyugin, M.S.; Luzhnova, M.A.; Lontsikh, S.V.
1983-01-01
Using the emission spectroscopy technique, new information has been obtained on the cadmium, arsenic and zinc content of some reference samples (RS) of rocks and soils not previously certified for these elements, as well as of newly issued RS of soils. A metrological estimation of the results obtained is carried out. Comparison with atomic-absorption data, as well as with neutron-activation, colorimetric and other methods of analysis, demonstrates the advantages of the spectrographic determination technique based on fractionated evaporation for determining cadmium and arsenic in rocks and soils. Consideration of the results of cadmium, arsenic and zinc spectrography contributed greatly to the certification of the soil reference samples
Bayesian Simultaneous Estimation for Means in k Sample Problems
Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay
2017-01-01
This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistic for testing the hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...
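A positive-part James-Stein-type estimator that shrinks individual sample means toward the pooled mean, in the spirit of the estimators derived above, can be sketched as follows; the known per-observation variance and the k − 3 shrinkage constant are simplifying assumptions for illustration, not the paper's exact construction:

```python
def shrink_toward_pooled(means, n, sigma2):
    """Positive-part James-Stein-type shrinkage of k sample means toward
    their pooled (grand) mean. means: list of k sample means, each based on
    n observations with known per-observation variance sigma2."""
    k = len(means)
    pooled = sum(means) / k
    s = sum((m - pooled) ** 2 for m in means)   # between-group spread
    var_mean = sigma2 / n                       # variance of each sample mean
    # shrink fully to the pooled mean when the means are indistinguishable
    shrink = max(0.0, 1.0 - (k - 3) * var_mean / s) if s > 0 else 0.0
    return [pooled + shrink * (m - pooled) for m in means]
```

When the sample means are far apart relative to their sampling noise, the estimates stay close to the raw means; when they are nearly equal, the estimates collapse onto the pooled mean, which is the behavior the abstract motivates.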
Laboratory Detective Work Identifies a Mishandling Problem in Sample Aliquoting
Zhu, Claire; Pinsky, Paul; Huang, Wen-Yi; Purdue, Mark
2014-01-01
Data from a recent ovarian cancer biomarker study using serum aliquots from the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial Biorepository showed that CA125II concentrations in these aliquots were significantly lower than those previously measured in the same subjects from the same blood draw. We designed an experiment to investigate whether samples used in the study (reference study) were compromised during the aliquoting process. We measured CA125II in the “sister” ...
International Nuclear Information System (INIS)
Kuddusi, Luetfullah; Denton, Jesse C.
2007-01-01
The constructal solution for cooling of electronics requires solution of a fundamental heat conduction problem in a composite slab composed of a heat generating slab and a thin strip of high conductivity material that is responsible for discharging the generated heat to a heat sink located at one end of the strip. The fundamental 2D heat conduction problem is solved analytically by applying an integral transform method. The analytical solution is then employed in a constructal solution, following Bejan, for cooling of electronics. The temperature and heat flux distributions of the elemental heat generating slabs are assumed to be the same as those of the analytical solution in all the elemental volumes and the high conductivity strips distributed in the different constructs. Although the analytical solution of the fundamental 2D heat conduction problem improves the accuracy of the distributions in the elemental slabs, the results following Bejan's strategy do not affirm the accuracy of Bejan's constructal solution itself as applied to this problem of cooling of electronics. Several different strategies are possible for developing a constructal solution to this problem as is indicated
Directory of Open Access Journals (Sweden)
M. Amodio
2014-01-01
Full Text Available The atmosphere is a carrier on which natural and anthropogenic organic and inorganic chemicals are transported, and wet and dry deposition events are the most important processes that remove those chemicals, depositing them on soil and water. A wide variety of collectors have been tested to evaluate the site specificity, seasonality and daily variability of settleable particle concentrations. Deposition fluxes of POPs showed spatial and seasonal variations, and diagnostic ratios of PAHs on deposited particles allowed discrimination between pyrolytic and petrogenic sources. Congener pattern analysis and bulk deposition fluxes at rural sites confirmed long-range atmospheric transport of PCDDs/Fs. Increasingly sophisticated and newly designed deposition samplers have been used for the characterization of deposited mercury, demonstrating the importance of rain scavenging and the relatively high magnitude of Hg deposition from Chinese anthropogenic sources. Recently, biological monitors demonstrated that PAH concentrations in lichens were comparable with concentrations measured with a conventional active sampler in an outdoor environment. In this review the authors explore the methodological approaches used for the assessment of atmospheric deposition, covering the sampling methods, the analytical procedures for chemical characterization of pollutants, and the main results from the scientific literature.
Energy Technology Data Exchange (ETDEWEB)
Herrero-Latorre, C., E-mail: carlos.herrero@usc.es; Álvarez-Méndez, J.; Barciela-García, J.; García-Martín, S.; Peña-Crecente, R.M.
2015-01-01
Highlights: • Analytical techniques for characterization of CNTs: classification, description and examples. • Determination methods for CNTs in biological and environmental samples. • Future trends and perspectives for characterization and determination of CNTs. - Abstract: In the present paper, a critical overview of the most commonly used techniques for the characterization and the determination of carbon nanotubes (CNTs) is given on the basis of 170 references (2000–2014). The analytical techniques used for CNT characterization (including microscopic and diffraction, spectroscopic, thermal and separation techniques) are classified, described, and illustrated with applied examples. Furthermore, the performance of sampling procedures as well as the available methods for the determination of CNTs in real biological and environmental samples are reviewed and discussed according to their analytical characteristics. In addition, future trends and perspectives in this field of work are critically presented.
International Nuclear Information System (INIS)
Ahmed, Y. A.; Ewa, I.O.B.; Funtua, I.I.; Jonah, S.A.; Landsberger, S.
2007-01-01
Compton Suppression Factors (SF) and Compton Reduction Factors (RF) of the UT Austin Compton suppression spectrometer, the parameters characterizing the system performance, were measured using 137Cs and 60Co point sources. The system performance was evaluated as a function of energy and geometry. The (P/C), A(P/C), (P/T), Cp, and Ce were obtained for each of the parameters. The natural background reduction factor in the anticoincidence mode and that of the normal mode were calculated, and the effect on the detection limit for biological samples was evaluated. The applicability of the spectrometer and the method to biological samples was tested in the measurement of twenty-four elements (Ba, Sr, I, Br, Cu, V, Mg, Na, Cl, Mn, Ca, Sn, In, K, Mo, Cd, Zn, As, Sb, Ni, Rb, Cs, Fe, and Co) commonly found in food, milk, tea and tobacco items. They were determined from seven National Institute of Standards and Technology (NIST) certified reference materials (rice flour, oyster tissue, non-fat powdered milk, peach leaves, tomato leaves, apple leaves, and citrus leaves). Our results show good agreement with the NIST certified values, indicating that the method developed in the present study is suitable for the determination of the aforementioned elements in biological samples without undue interference problems
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Thorsland, Martin N.; Novak, Joseph D.
1974-01-01
Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…
System effects in sample self-stacking CZE: Single analyte peak splitting of salt-containing samples
Czech Academy of Sciences Publication Activity Database
Malá, Zdeňka; Gebauer, Petr; Boček, Petr
2009-01-01
Roč. 30, č. 5 (2009), s. 866-874 ISSN 0173-0835 R&D Projects: GA ČR GA203/08/1536; GA AV ČR IAA400310609; GA AV ČR IAA400310703 Institutional research plan: CEZ:AV0Z40310501 Keywords : CZE * peak splitting * self-stacking Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.077, year: 2009
Review of Analytes of Concern and Sample Methods for Closure of DOE High Level Waste Storage Tanks
International Nuclear Information System (INIS)
Thomas, T.R.
2002-01-01
Sampling residual waste after tank cleaning and analysis for analytes of concern to support closure and cleaning targets of large underground tanks used for storage of legacy high level radioactive waste (HLW) at Department of Energy (DOE) sites has been underway since about 1995. The DOE Tanks Focus Area (TFA) has been working with DOE tank sites to develop new sampling plans, and sampling methods for assessment of residual waste inventories. This paper discusses regulatory analytes of concern, sampling plans, and sampling methods that support closure and cleaning target activities for large storage tanks at the Hanford Site, the Savannah River Site (SRS), the Idaho National Engineering and Environmental Laboratory (INEEL), and the West Valley Demonstration Project (WVDP)
Expressing analytical performance from multi-sample evaluation in laboratory EQA.
Thelen, Marc H M; Jansen, Rob T P; Weykamp, Cas W; Steigstra, Herman; Meijer, Ron; Cobbaert, Christa M
2017-08-28
To provide its participants with an external quality assessment system (EQAS) that can be used to check trueness, the Dutch EQAS organizer, the Organization for Quality Assessment of Laboratory Diagnostics (SKML), has innovated its general chemistry scheme over the last decade by introducing fresh frozen commutable samples whose values were assigned by Joint Committee for Traceability in Laboratory Medicine (JCTLM)-listed reference laboratories using reference methods where possible. Here we present some important innovations in our feedback reports that allow participants to judge whether their trueness and imprecision meet predefined analytical performance specifications. Sigma metrics are used to calculate performance indicators named 'sigma values'. Tolerance intervals are based on both total error allowable (TEa) according to biological variation data and state of the art (SA), in line with the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Milan consensus. The existing SKML feedback reports, which express trueness as the agreement between the regression line through the results of the last 12 months and the values obtained from reference laboratories and calculate imprecision from the residuals of the regression line, are now enriched with sigma values calculated from the degree to which the combination of trueness and imprecision is within tolerance limits. The information, and its condensation into a simple two-point scoring system, is also represented graphically in addition to the existing difference plot. By adding sigma metrics-based performance evaluation in relation to both TEa and SA tolerance intervals to its EQAS schemes, SKML provides its participants with a powerful and actionable check on accuracy.
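A sigma value of the kind described above is conventionally computed as (TEa − |bias|) / CV, with all quantities expressed as percentages of the target value; a minimal sketch (SKML's actual computation derives bias and imprecision from the 12-month regression, so this is a simplified illustration):

```python
def sigma_value(tea_pct, bias_pct, cv_pct):
    """Sigma metric from total allowable error (TEa), observed bias, and
    imprecision (CV), all expressed in percent of the target value."""
    return (tea_pct - abs(bias_pct)) / cv_pct
```

For example, with TEa = 10%, bias = 2% and CV = 2%, the sigma value is 4; a common rule of thumb treats sigma ≥ 4 as acceptable performance and sigma ≥ 6 as world-class.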
Laser ablation for analytical sampling: what can we learn from modeling?
International Nuclear Information System (INIS)
Bogaerts, Annemie; Chen Zhaoyang; Gijbels, Renaat; Vertes, Akos
2003-01-01
The paper is built up in two parts. First, a rather comprehensive introduction is given, with a brief overview of the different application fields of laser ablation, focusing mainly on the analytical applications, and an overview of the different modeling approaches available for laser ablation. Further, a discussion is presented about the laser-evaporated plume expansion in vacuum or in a background gas, as well as about the different mechanisms for particle formation in the laser ablation process, which is most relevant for laser ablation as a solid sampling technique for inductively coupled plasma (ICP) spectrometry. In the second part, a model is presented that describes the interaction of an ns-pulsed laser with a Cu target, as well as the resulting plume expansion and plasma formation. The results presented here include the temperature distribution in the target, the melting and evaporation of the target, the vapor density, velocity and temperature distribution in the evaporated plume, the ionization degree and the density profiles of Cu^0 atoms, Cu^+ and Cu^2+ ions and electrons in the plume (plasma), as well as the resulting plasma shielding of the incoming laser beam. Results are presented as a function of time during and after the laser pulse, and as a function of position in the target or in the plume. The influence of the target reflection coefficient on the above calculation results is investigated. Finally, the effect of the laser pulse fluence on the target heating, melting and vaporization, and on the plume characteristics and plasma formation is studied. Our modeling results are in reasonable agreement with calculated and measured data from the literature
Analytical strategies for uranium determination in natural water and industrial effluents samples
International Nuclear Information System (INIS)
Santos, Juracir Silva
2011-01-01
The work was developed under project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' - and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and in effluents of a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES), performed using factorial and Doehlert designs involving the factors acid concentration, radio-frequency power and nebulizer gas flow rate. Five emission lines were studied simultaneously (namely 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, H3CCOOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines, and iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision, expressed as RSD, lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and minimize reagent consumption, as well as
International Nuclear Information System (INIS)
Ingeneri, Kristofer; Riciputi, L.
2001-01-01
Following successful field trials, environmental sampling has played a central role as a routine part of safeguards inspections since early 1996, both to verify declared activities and to detect undeclared ones. The environmental sampling program has brought a new series of analytical challenges and driven a need for advances in verification technology. Environmental swipe samples are often extremely low in analyte concentration (ng level or lower), yet the need to analyze these samples accurately and precisely is vital, particularly for the detection of undeclared nuclear activities. Thermal ionization mass spectrometry (TIMS) is the standard method of determining isotope ratios of uranium and plutonium in the environmental sampling program. TIMS analysis typically employs 1-3 filaments to vaporize and ionize the sample, and the ions are mass separated and analyzed using magnetic sector instruments because of their high mass resolution and high ion transmission. However, the ionization efficiency (the ratio of material actually detected to material present) of uranium using a standard TIMS instrument is low (0.2%), even under the best conditions. Increasing the ionization efficiency by even a small amount would have a dramatic impact for safeguards applications, allowing both improvements in analytical precision and a significant decrease in the amount of uranium and plutonium required for analysis, increasing the sensitivity of environmental sampling
International Nuclear Information System (INIS)
Gonis, Antonios; Daene, Markus W.; Nicholson, Don M.; Stocks, George Malcolm
2012-01-01
We have developed and tested, in atomic calculations, an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the Kohn-Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for atomic He and Be, and comparisons with experiment and with results obtained within the optimized effective potential (OEP) method.
The problem of large samples. An activation analysis study of electronic waste material
International Nuclear Information System (INIS)
Segebade, C.; Goerner, W.; Bode, P.
2007-01-01
Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to obtain an estimate of the minimum sample size to be taken to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters upon the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and from instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only some of the values were found to be in satisfactory agreement. (author)
International Nuclear Information System (INIS)
Birchall, A.
1989-04-01
Intakes of radionuclides are estimated with the personal air sampler (PAS) and by biological monitoring techniques: in the case of plutonium, there are problems with both methods. The statistical variation in activity collected when sampling radioactive aerosols with low number concentrations was investigated. It was shown that the PAS is barely adequate for monitoring plutonium at annual limit of intake (ALI) levels in typical workplace conditions. Two algorithms were developed, enabling non-recycling and recycling compartmental models to be solved. Their accuracy and speed were investigated, and methods of dealing with partitioning, continuous intake, and radioactive progeny were discussed. Analytical, rather than numerical, methods were used. These are faster, and thus ideally suited for implementation on microcomputers. The algorithms enable non-specialists to solve quickly and easily any first order compartmental model, including all the ICRP metabolic models. Non-recycling models with up to 50 compartments can be solved in seconds: recycling models take a little longer. A biokinetic model for plutonium in man following systemic uptake was developed. The proposed ICRP lung model (1989) was represented by a first order compartmental model. These two models were combined, and the recycling algorithm was used to calculate urinary and faecal excretion of plutonium following acute or chronic intake by inhalation. The results indicate much lower urinary excretion than predicted by ICRP Publication 54. (author)
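Non-recycling first-order compartmental models of the kind the algorithms above solve analytically have a classical closed-form (Bateman) solution for a linear chain; a minimal sketch, assuming distinct rate constants and all initial activity in the first compartment:

```python
import math

def bateman_chain(q0, lambdas, t):
    """Analytic content of each compartment at time t in a non-recycling
    first-order chain 1 -> 2 -> ... -> k (classic Bateman solution),
    starting with amount q0 in compartment 1. All rate constants in
    `lambdas` must be distinct."""
    k = len(lambdas)
    q = []
    for i in range(k):
        total = 0.0
        for j in range(i + 1):
            # denominator: product of (lambda_p - lambda_j) over p != j
            denom = 1.0
            for p in range(i + 1):
                if p != j:
                    denom *= (lambdas[p] - lambdas[j])
            total += math.exp(-lambdas[j] * t) / denom
        prod = 1.0
        for p in range(i):          # product of transfer rates feeding i
            prod *= lambdas[p]
        q.append(q0 * prod * total)
    return q
```

For a two-compartment chain with rates 1.0 and 2.0, the second compartment follows q0·(e^(−t) − e^(−2t)), the familiar parent-daughter result.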
Ficklin, W.H.; Nowlan, G.A.; Preston, D.J.
1983-01-01
Water samples were collected in the vicinity of Jackman, Maine as a part of the study of the relationship of dissolved constituents in water to the sediments subjacent to the water. Each sample was analyzed for specific conductance, alkalinity, acidity, pH, fluoride, chloride, sulfate, phosphate, nitrate, sodium, potassium, calcium, magnesium, and silica. Trace elements determined were copper, zinc, molybdenum, lead, iron, manganese, arsenic, cobalt, nickel, and strontium. The longitude and latitude of each sample location and a sample site map are included in the report as well as a table of the analytical results.
Analytic Solution to the Problem of Aircraft Electric Field Mill Calibration
Koshak, W. J.
2003-12-01
It is by no means a simple task to retrieve storm electric fields from an aircraft instrumented with electric field mill sensors. The presence of the aircraft distorts the ambient field in a complicated way. Before retrievals of the storm field can be made, the field mill measurement system must be "calibrated". In other words, a relationship between the impressed (i.e., ambient) electric field and mill output must be established. If this relationship can be determined, it is mathematically inverted so that the ambient field can be inferred from the mill outputs. Previous studies have primarily focused on linear theories where the "relationship" between ambient field and mill output is described by a "calibration matrix" M. Each element of the matrix describes how a particular component of the ambient field is enhanced by the aircraft. For example, the product M_ix E_x is the contribution of the E_x field to the ith mill output. Similarly, net aircraft charge (described by a "charge field component" E_q) contributes an amount M_iq E_q to the output of the ith sensor. The central difficulty in obtaining M stems from the fact that the impressed field (E_x, E_y, E_z, E_q) is not known but is instead estimated. Typically, the aircraft is flown through a series of roll and pitch maneuvers in fair weather, and the values of the fair-weather field and aircraft charge are estimated at each point along the aircraft trajectory. These initial estimates are often highly inadequate, but several investigators have improved the estimates by implementing various (ad hoc) iterative methods. Though numerical tests show that some of the iterative methods do improve the initial estimates, none of the iterative methods guarantee absolute convergence to the true values, or even to values reasonably close to the true values when measurement errors are present. In this work, the mathematical problem is solved directly by analytic means. For m mills installed on an arbitrary aircraft, it is shown that it is
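Once a calibration matrix M is in hand, the linear inversion step the abstract refers to reduces to least squares. A minimal sketch, with an entirely made-up 5-mill calibration matrix and ambient field (the numbers illustrate the algebra, not any real aircraft):

```python
import numpy as np

# Hypothetical calibration matrix for 5 mills: each row is one mill, each
# column is the enhancement factor for (Ex, Ey, Ez, Eq). Values invented.
M = np.array([[ 2.1,  0.3,  1.0, 0.8],
              [-1.9,  0.4,  1.1, 0.7],
              [ 0.2,  2.5, -0.9, 0.9],
              [ 0.1, -2.4, -1.0, 1.0],
              [ 0.0,  0.1,  3.0, 1.2]])

E_true = np.array([1.5, -0.5, 10.0, 2.0])  # assumed ambient field + charge term
outputs = M @ E_true                       # what the five mills would report

# Least-squares inversion recovers (Ex, Ey, Ez, Eq) from the mill outputs
E_est, *_ = np.linalg.lstsq(M, outputs, rcond=None)
```

With more mills than unknowns the system is overdetermined, and the least-squares fit also averages down independent mill noise.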
Kałużna-Czaplińska, Joanna; Rosiak, Angelina; Kwapińska, Marzena; Kwapiński, Witold
2016-01-01
The analysis of the composition of organic residues present in pottery is an important source of information for historians and archeologists. Chemical characterization of the materials provides information on diets, habits, technologies, and original use of the vessels. This review presents the problem of analytical studies of archeological materials with a special emphasis on organic residues. Current methods used in the determination of different organic compounds in archeological ceramics are presented. Particular attention is paid to the procedures of analysis of archeological ceramic samples used before gas chromatography-mass spectrometry. Advantages and disadvantages of different extraction methods and application of proper quality assurance/quality control procedures are discussed.
Directory of Open Access Journals (Sweden)
Aisha Noreen
2016-06-01
Full Text Available Meloxicam (MX) belongs to the family of oxicams, the most important group of non-steroidal anti-inflammatory drugs (NSAIDs), and is widely used for its analgesic and antipyretic activities. It inhibits both COX-I and COX-II enzymes with less gastric and local tissue irritation. A number of analytical techniques have been used for the determination of MX in pharmaceutical preparations as well as in biological fluids. These techniques include titrimetry, spectrometry, chromatography, flow injection spectrometry, fluorescence spectrometry, capillary zone electrophoresis and electrochemical techniques. Many of these techniques have also been used for the simultaneous determination of MX with other compounds. A comprehensive review of these analytical techniques is presented, which could be useful for analytical chemists and quality control pharmacists.
Xu, Wei
2007-12-01
This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.
International Nuclear Information System (INIS)
Stefanovic, D.B.
1970-12-01
The objective of this work is to describe a new analytical solution of the neutron slowing-down equation for infinite monoatomic media with arbitrary energy dependence of the cross section. The solution is obtained by introducing Green slowing-down functions instead of starting from the slowing-down equations directly. The previously used methods for calculating fission neutron spectra in the reactor cell were numerical. The proposed analytical method was used for calculating the space-energy distribution of fast neutrons and the number of neutron reactions in a thermal reactor cell. The role of the analytical method in treating neutron slowing down in reactor physics is to enable understanding of the slowing-down process and neutron transport. The obtained results could be used as standards for testing the accuracy of approximate and practical methods.
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
Full Text Available In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require a quantity of fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or a solution with increasing length of support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
Directory of Open Access Journals (Sweden)
Bulat Kenessov
2015-12-01
Full Text Available Most rockets of middle and heavy class launched from Kazakhstan, Russia, China and other countries still use highly toxic unsymmetrical dimethylhydrazine (UDMH) as a liquid propellant. Study of the migration, distribution and accumulation of UDMH transformation products in the environment, and human-health impact assessment of space rocket activity, are currently complicated by the absence of analytical methods allowing detection of trace concentrations of these compounds in analyzed samples. This paper reviews methods and approaches which can be applied to the development of such methods. Detection limits at a part-per-trillion (ppt) level may be achieved using the most selective and sensitive methods based on gas or liquid chromatography in combination with tandem or high-resolution mass spectrometry. In addition, 1000-fold concentration of samples or integrated sample preparation methods, e.g., dynamic headspace extraction, are required. Special attention during development and application of such methods must be paid to the purity of laboratory air, reagents, glassware and analytical instruments.
Recent bibliography on analytical and sampling problems of a PWR primary coolant Suppl. 5
International Nuclear Information System (INIS)
Illy, H.
1987-11-01
The present supplement reviews the literature published after the five previous bibliographies (1980-1986), up to September 1987. It also includes some references overlooked in the first five volumes. The serial numbers continue from the first five bibliographies. Cross-referencing was not intended. This bibliographical supplement of 161 references is arranged by topic; within each topic the references are listed alphabetically by the name of the first author of each work. Works are in English unless otherwise marked. (author)
Directory of Open Access Journals (Sweden)
Takashi Ito
2016-01-01
Full Text Available Terms in the analytic expansion of the doubly averaged disturbing function for the circular restricted three-body problem using the Legendre polynomials are explicitly calculated up to the fourteenth order of the semimajor axis ratio (α) between perturbed and perturbing bodies in the inner case (α < 1). The expansion outcome is compared with results from numerical quadrature on an equipotential surface. Comparison with direct numerical integration of the equations of motion is also presented. Overall, the high-order analytic expansion of the doubly averaged disturbing function yields a result that agrees well with the numerical quadrature and with the numerical integration. Local extrema of the doubly averaged disturbing function are quantitatively reproduced by the high-order analytic expansion even when α is large. Although the analytic expansion is not applicable in some circumstances, such as when the orbits of the perturbed and perturbing bodies cross or when strong mean motion resonance is at work, our expansion result will be useful for analytically understanding the long-term dynamical behavior of perturbed bodies in circular restricted three-body systems.
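The building block of such an expansion is the Legendre polynomial recurrence. The sketch below sums α^l P_l(cos ψ) for l = 2…14, the structural skeleton of the inner-case direct disturbing function (unit prefactor, no averaging — an illustration of the expansion structure, not the paper's doubly averaged series), and checks it against the closed-form generating function:

```python
import math

def legendre(lmax, x):
    """P_0(x)..P_lmax(x) via Bonnet's recurrence:
    (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x)."""
    p = [1.0, x]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    return p[:lmax + 1]

def disturbing_series(alpha, cospsi, lmax=14):
    """Truncated sum_{l=2}^{lmax} alpha^l P_l(cos psi), i.e. the
    direct-part expansion carried to 14th order in alpha."""
    P = legendre(lmax, cospsi)
    return sum(alpha ** l * P[l] for l in range(2, lmax + 1))

# Check against the generating function 1/sqrt(1 - 2*alpha*x + alpha^2),
# whose full Legendre series starts 1 + alpha*x + (l >= 2 terms):
alpha, x = 0.1, 0.5
approx = disturbing_series(alpha, x)
exact = 1.0 / math.sqrt(1.0 - 2.0 * alpha * x + alpha ** 2) - 1.0 - alpha * x
```

For small α the 14th-order truncation is essentially exact; as α grows toward 1 more terms are needed, which is why the paper carries the expansion to high order.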
Analytic theory of curvature effects for wave problems with general boundary conditions
DEFF Research Database (Denmark)
Willatzen, Morten; Gravesen, Jens; Voon, L. C. Lew Yan
2010-01-01
A formalism based on a combination of differential geometry and perturbation theory is used to obtain analytic expressions for confined eigenmode changes due to general curvature effects. In cases of circular-shaped and helix-shaped structures, where alternative analytic solutions can be found, the perturbative solution is shown to yield the same result. The present technique allows the generalization of earlier results to arbitrary boundary conditions. The power of the method is illustrated using examples based on Maxwell's and Schrödinger's equations for applications in photonics and nanoelectronics.
40 CFR 141.23 - Inorganic chemical sampling and analytical requirements.
2010-07-01
... may allow a groundwater system to reduce the sampling frequency to annually after four consecutive... this section. (a) Monitoring shall be conducted as follows: (1) Groundwater systems shall take a... system shall take each sample at the same sampling point unless conditions make another sampling point...
Directory of Open Access Journals (Sweden)
Rainer Diaz-Bone
2006-05-01
Full Text Available Abstract: The German discourse researcher Siegfried JÄGER from Duisburg is the first to have published a German-language book about the methodology of discourse analysis after FOUCAULT. JÄGER integrates in his work the discourse analytic work of Jürgen LINK as well as the interdisciplinary discussion carried on in the discourse analytic journal "kultuRRevolution" (Journal for Applied Discourse Analysis). JÄGER and his co-workers have been associated with the Duisburg Institute for Language Research and Social Research (DISS, see http://www.diss-duisburg.de/) for 20 years, developing discourse theory and the methodology of discourse analysis. The interview was done via e-mail. It depicts the discourse analytic approach of JÄGER and his co-workers following the works of FOUCAULT and LINK. The interview reconstructs JÄGER's vita and his academic career. Further topics of the interview are the agenda of JÄGER's discourse studies, methodological considerations, the (problematic) relationship between FOUCAULDian discourse analysis and (discourse) linguistics, styles and organization of research, and questions concerning applied discourse analytic research as a form of critical intervention. URN: urn:nbn:de:0114-fqs0603219
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
International Nuclear Information System (INIS)
Filho, J. F. P.; Barichello, L. B.
2013-01-01
In this work, an analytical discrete ordinates method is used to solve a nodal formulation of a neutron transport problem in x, y-geometry. The proposed approach leads to an important reduction in the order of the associated eigenvalue systems, when combined with the classical level symmetric quadrature scheme. Auxiliary equations are proposed, as usually required for nodal methods, to express the unknown fluxes at the boundary introduced as additional unknowns in the integrated equations. Numerical results, for the problem defined by a two-dimensional region with a spatially constant and isotropically emitting source, are presented and compared with those available in the literature. (authors)
Hilliard, Mark; Alley, William R; McManus, Ciara A; Yu, Ying Qing; Hallinan, Sinead; Gebler, John; Rudd, Pauline M
Glycosylation is an important attribute of biopharmaceutical products to monitor from development through production. However, glycosylation analysis has traditionally been a time-consuming process with long sample preparation protocols and manual interpretation of the data. To address the challenges associated with glycan analysis, we developed a streamlined analytical solution that covers the entire process from sample preparation to data analysis. In this communication, we describe the complete analytical solution that begins with a simplified and fast N-linked glycan sample preparation protocol that can be completed in less than 1 hr. The sample preparation includes labelling with RapiFluor-MS tag to improve both fluorescence (FLR) and mass spectral (MS) sensitivities. Following HILIC-UPLC/FLR/MS analyses, the data are processed and a library search based on glucose units has been included to expedite the task of structural assignment. We then applied this total analytical solution to characterize the glycosylation of the NIST Reference Material mAb 8761. For this glycoprotein, we confidently identified 35 N-linked glycans and all three major classes, high mannose, complex, and hybrid, were present. The majority of the glycans were neutral and fucosylated; glycans featuring N-glycolylneuraminic acid and those with two galactoses connected via an α1,3-linkage were also identified.
International Nuclear Information System (INIS)
Basu, A.K.; Bhadkambekar, C.A.; Tripathi, A.B.R.; Chattopadhyay, N.; Ghosh, P.
2010-01-01
Nuclear approaches to compositional characterization have bright application prospects in a forensic perspective, towards assessing the nature and origin of seized material. The macro and micro physical properties of nuclear materials can be specifically associated with a process or type of nuclear activity. Under the jurisdiction of nuclear analytical chemistry as well as nuclear forensics, thrust areas of scientific endeavor such as the determination of radioisotopes, isotopic and mass ratios, analysis of impurity contents, and establishing chemical forms/species and physical parameters provide supporting evidence in forensic investigations. The analytical methods developed for these purposes can be used in international safeguards as well as in nuclear forensics. Nuclear material seized in nuclear trafficking can be identified and a profile of the nuclear material can be created.
Quantification of process induced disorder in milled samples using different analytical techniques
DEFF Research Database (Denmark)
Zimper, Ulrike; Aaltonen, Jaakko; McGoverin, Cushla M.
2012-01-01
The aim of this study was to compare three different analytical methods to detect and quantify the amount of crystalline disorder/amorphousness in two milled model drugs. X-ray powder diffraction (XRPD), differential scanning calorimetry (DSC) and Raman spectroscopy were used as analytical methods, and indomethacin and simvastatin were chosen as the model compounds. These compounds partly converted from crystalline to disordered forms by milling. Partial least squares regression (PLS) was used to create calibration models for the XRPD and Raman data, which were subsequently used to quantify the milling-induced crystalline disorder/amorphousness under different process conditions. In the DSC measurements the change in heat capacity at the glass transition was used for quantification. Differently prepared amorphous indomethacin standards (prepared by either melt quench cooling or cryo milling) were compared...
DEFF Research Database (Denmark)
Hansen, Michael Møller; Eg Nielsen, Einar; Mensberg, Karen-Lise Dons
1997-01-01
In species exhibiting a nonrandom distribution of closely related individuals, sampling of a few families may lead to biased estimates of allele frequencies in populations. This problem was studied in two brown trout populations, based on analysis of mtDNA and microsatellites. In both samples mtDNA haplotype frequencies differed significantly between age classes, and in one sample 17 out of 18 individuals less than 1 year of age shared one particular mtDNA haplotype. Estimates of relatedness showed that these individuals most likely represented only three full-sib families. Older trout exhibiting...
International Nuclear Information System (INIS)
Femec, D.A.
1995-09-01
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
Tank 241-AP-103 08/1999 Compatibility Grab Samples, Analytical Results for the Final Report
International Nuclear Information System (INIS)
BELL, K.E.
1999-01-01
This document is the format IV, final report for the tank 241-AP-103 (AP-103) grab samples taken in August 1999 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank AP-103 samples were performed as directed in ''Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999'' (Sasaki 1999a). Any deviations from the instructions provided in the tank sampling and analysis plan (TSAP) are discussed in this narrative. No notification limits were exceeded.
On analytical solutions to the problem of the Coulomb and confining potentials
International Nuclear Information System (INIS)
Dineykhan, M.; Nazmitdinov, R.G.
1997-01-01
The oscillator representation method is presented and applied to calculate the energy spectrum of the superposition of the Coulomb and power-law potentials, and of the Coulomb and Yukawa potentials. The method provides an efficient way to obtain analytical results for an arbitrary set of parameters of the considered potentials. The energies of the ground and excited states of a quantum system are in good agreement with the exact results.
Tank 241-SY-102, January 2000 Compatibility Grab Samples Analytical Results for the Final Report
International Nuclear Information System (INIS)
BELL, K.E.
2000-01-01
This document is the format IV, final report for the tank 241-SY-102 (SY-102) grab samples taken in January 2000 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank SY-102 samples were performed as directed in ''Compatibility Grab Sampling and Analysis Plan for Fiscal Year 2000'' (Sasaki 1999). No notification limits were exceeded. Preliminary data on samples 2SY-99-5, -6, and -7 were reported in ''Format II Report on Tank 241-SY-102 Waste Compatibility Grab Samples Taken in January 2000'' (Lockrem 2000). The data presented here represent the final results.
P.P.J. van den Bosch; Edwin Tazelaar; M. Grimminck; Stijn Hoppenbrouwers; Bram Veenhuizen
2011-01-01
The objective of an energy management strategy for fuel cell hybrid propulsion systems is to minimize the fuel needed to provide the required power demand. This minimization is defined as an optimization problem. Methods such as dynamic programming numerically solve this optimization problem.
International Nuclear Information System (INIS)
Hall, S.K.; Eaton, M.D.; Williams, M.M.R.
2012-01-01
Highlights: ► Isogeometric analysis used to obtain solutions to the neutron diffusion equation. ► Exact geometry captured for a circular fuel pin within a square moderator. ► Comparisons are made between the finite element method and isogeometric analysis. ► Error and observed order of convergence found using an analytic solution. -- Abstract: In this paper the neutron diffusion equation is solved using Isogeometric Analysis (IGA), which is an attempt to generalise Finite Element Analysis (FEA) to include exact geometries. In contrast to FEA, the basis functions are rational functions instead of polynomials. These rational functions, called non-uniform rational B-splines, are used to capture both the geometry and approximate the solution. The method of manufactured solutions is used to verify a MatLab implementation of IGA, which is then applied to a pincell problem. This is a circular uranium fuel pin within a square block of graphite moderator. A new method is used to compute an analytic solution to a simplified version of this problem, and is then used to observe the order of convergence of the numerical scheme. Comparisons are made against quadratic finite elements for the pincell problem, and it is found that the disadvantage factor computed using IGA is less accurate. This is due to a cancellation of errors in the FEA solution. A modified pincell problem with vacuum boundary conditions is then considered. IGA is shown to outperform FEA in this situation.
Fuller, Nathaniel J.; Licata, Nicholas A.
2018-05-01
Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
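The scaling at the heart of this mass-transport problem can be sketched numerically. The swimmer size, speed, and diffusivity below are assumptions of the sketch (roughly Volvox-like), not values from the paper; the δ ~ a·Pe^(-1/3) boundary-layer scaling is the classical large-Péclet result for a sphere in Stokes flow:

```python
def peclet(U, a, D):
    """Peclet number Pe = U * a / D: ratio of advective to diffusive
    solute transport around a swimmer of radius a moving at speed U."""
    return U * a / D

def boundary_layer_thickness(a, Pe):
    """Classical large-Pe scaling of the concentration boundary layer
    around a sphere in Stokes flow: delta ~ a * Pe**(-1/3)."""
    return a * Pe ** (-1.0 / 3.0)

# Assumed, illustrative values: a = 200 um, U = 100 um/s, D = 1e-9 m^2/s
a, U, D = 2e-4, 1e-4, 1e-9
Pe = peclet(U, a, D)                     # = 20: advection dominates
delta = boundary_layer_thickness(a, Pe)  # thin concentration layer, delta < a
```

A thin boundary layer (δ ≪ a) is exactly the regime where the rescaling approach described in the abstract pays off: the concentration field only needs to be resolved near the cell surface.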
Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona
2018-05-01
The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore investigated the analytical and between-subject variation of CAT systematically. Moreover, we assessed the application of an internal standard for normalization to diminish variation. 20 healthy volunteers donated one blood sample which was subsequently centrifuged, aliquoted and stored at -80 °C prior to analysis. The analytical variation was determined on eight runs, where plasma from the same seven volunteers was processed in triplicate, and for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma drawn from a single donor was applied to all plates as an internal standard for each TG analysis, which was subsequently used for normalization. The total analytical variation for TG analysis performed with PPPlow reagent is 3-14% and 9-13% for PPP reagent. This variation can be reduced only marginally by using an internal standard, and mainly for the ETP (endogenous thrombin potential). The between-subject variation is higher when using PPPlow than PPP, and this variation is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, but one considerably lower than the between-subject variation when PPPlow is used as reagent.
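The internal-standard normalization described above amounts to a simple rescaling: each run's result is multiplied by (reference value / internal-standard value measured on the same plate). The ETP numbers below are invented for illustration; in this toy case the run-to-run drift is perfectly proportional, so normalization removes the analytical variation entirely, which real CAT data (as the abstract reports) will not do:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation in percent: 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def normalize(run_values, internal_standard, reference):
    """Rescale each run so the internal-standard plasma measured on the
    same plate would equal a fixed reference value (hypothetical scheme)."""
    return [v * reference / s for v, s in zip(run_values, internal_standard)]

# Invented ETP values (nM*min) for one donor across four runs, plus the
# internal-standard plasma measured on the same four plates:
etp = [1500.0, 1620.0, 1440.0, 1560.0]
istd = [1000.0, 1080.0, 960.0, 1040.0]
norm = normalize(etp, istd, reference=1000.0)  # drift here is proportional
```

In practice the plate-to-plate drift is only partly proportional, which is why the paper finds the internal standard gives only a marginal reduction, mainly for ETP.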
Degradation of hydrocarbons in soil samples analyzed within accepted analytical holding times
International Nuclear Information System (INIS)
Jackson, J.; Thomey, N.; Dietlein, L.F.
1992-01-01
Samples which are collected in conjunction with subsurface investigations at leaking petroleum storage tank sites and petroleum refineries are routinely analyzed for benzene, toluene, ethylbenzene, xylenes (BTEX), and total petroleum hydrocarbons (TPH). Water samples are preserved by the addition of hydrochloric acid and maintained at four degrees centigrade prior to analysis. This is done to prevent bacterial degradation of hydrocarbons. Chemical preservation is not presently performed on soil samples. Instead, the samples are cooled and maintained at four degrees centigrade. This study was done to measure the degree of degradation of hydrocarbons in soil samples which are analyzed within accepted holding times. Soil samples were collected and representative subsamples were prepared from the initial sample. Subsamples were analyzed in triplicate for BTEX and TPH throughout the length of the approved holding times to measure the extent of sample constituent degradation prior to analysis. Findings imply that for sandy soils, BTEX and TPH concentrations can be highly dependent upon the length of time which elapses between sample collection and analysis
Mental health problems of aging and the aged from the viewpoint of analytical psychology*
Bash, K. W.
1959-01-01
According to Jung's analytical psychology man is either predominantly extravert or predominantly introvert. Whichever he is, he must in most cases, in order to satisfy the biological drives of the earlier part of his life, adapt himself to an extraverted culture and thus become largely extravert. In the later part of life, as biological involution sets in, this attitude and the values attached thereto no longer suffice. The strains set up by the resulting need for a reorientation in life are a fruitful source of mental disorder. PMID:20604058
Analytic Solution of the Electromagnetic Eigenvalues Problem in a Cylindrical Resonator
Energy Technology Data Exchange (ETDEWEB)
Checchin, Mattia [Fermilab; Martinello, Martina [Fermilab
2016-10-06
Resonant accelerating cavities are key components in modern particle accelerator facilities. They take advantage of electromagnetic fields resonating at microwave frequencies to accelerate charged particles. Particles gain finite energy at each passage through a cavity if in phase with the resonating field, reaching energies even of the order of TeV when a cascade of accelerating resonators is present. In order to understand how a resonant accelerating cavity transfers energy to charged particles, it is important to determine how the electromagnetic modes are excited in such resonators. In this paper we present a complete analytical calculation of the resonating fields for a simple cylindrical cavity.
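For the lowest accelerating mode of an ideal pillbox (cylindrical) cavity, the analytic eigenvalue problem reduces to the first zero of the Bessel function J0: the TM010 frequency is f = c·j01/(2πR), independent of the cavity length. A small sketch (the 1.3 GHz target is an illustrative choice, not a value from the paper):

```python
import math

C = 299_792_458.0          # speed of light in vacuum (m/s)
J01 = 2.404825557695773    # first zero of the Bessel function J0

def tm010_frequency(radius_m):
    """TM010 resonant frequency of an ideal pillbox cavity:
    f = c * j01 / (2*pi*R); it does not depend on the cavity length."""
    return C * J01 / (2.0 * math.pi * radius_m)

# Example: pick the radius (hypothetically) so the cavity resonates at 1.3 GHz
R = C * J01 / (2.0 * math.pi * 1.3e9)   # about 8.8 cm
f = tm010_frequency(R)
```

Inverting the same formula gives the radius needed for a chosen frequency, which is how pillbox estimates of real cavity dimensions are commonly made.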
International Nuclear Information System (INIS)
Adachi, T.; Usuda, S.; Watanabe, K.
2001-01-01
Full text: In order to contribute to the strengthened safeguards system based on the Program 93+2 of the IAEA, the Japan Atomic Energy Research Institute (JAERI) is developing analytical technology for ultra-trace amounts of nuclear materials in environmental samples, and has constructed the CLEAR facility (Clean Laboratory for Environmental Analysis and Research) for this purpose. The development of the technology is carried out, at existing laboratories for the time being, in the following fields: screening, bulk analysis and particle analysis. The screening aims at estimating the amounts of nuclear materials in environmental samples to be introduced into the clean rooms, and is the first step to avoid cross-contamination among the samples and contamination of the clean rooms themselves. In addition to ordinary radiation spectrometry, a Compton suppression technique was applied to low-energy γ- and X-ray measurements, and a sufficient reduction in background level has been demonstrated. Another technique under examination is the imaging-plate method, a kind of autoradiography suitable for determining the radioactive-particle distribution in the samples as well as for semiquantitative determination. As for the bulk analysis, efforts are for the time being concentrated on uranium in swipe samples. Preliminary examination for optimization of sample pre-treatment conditions is in progress. At present, ashing by the low-temperature-plasma method gives better results than high-temperature ashing or acid leaching. For the isotopic ratio measurement, the instrumental performance of inductively-coupled plasma mass spectrometry (ICP-MS) is mainly examined because sample preparation for ICP-MS is simpler than that for thermal ionization mass spectrometry (TIMS). It was found by our measurement that the swipe material (TexWipe TX304, usually used by the IAEA) contains a non-negligible uranium blank with large deviation (2-6 ng/sheet). This would introduce significant uncertainty in the trace analysis. JAERI
W. Ketter (Wolfgang); M. Peters (Markus); J. Collins (John); A. Gupta (Alok)
2015-01-01
Wicked problems like sustainable energy and financial market stability are societal challenges that arise from complex socio-technical systems in which numerous social, economic, political, and technical factors interact. Understanding and mitigating them requires research methods that
Parent-reported feeding and feeding problems in a sample of Dutch toddlers
Moor, J.M.H. de; Didden, H.C.M.; Korzilius, H.P.L.M.
2007-01-01
Little is known about the feeding behaviors and problems with feeding in toddlers. In the present questionnaire study, data were collected on the feeding behaviors and feeding problems in a relatively large (n = 422) sample of Dutch healthy toddlers (i.e. 18-36 months old) who lived at home with
Integrated assessment of the global warming problem. A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1995-01-01
The project on the title subject aims at developing a policy-oriented methodology for the integrated assessment of the global warming problem. Decision analysis in general, and influence diagrams in particular, appear to constitute an appropriate integrated assessment methodology. The influence-diagram approach is illustrated by a preliminary integrated model of the global warming problem. In later stages of the research, attention will shift from the methodology of integrated assessment to the contents of integrated models. 4 figs., 5 refs
Orgovan, Norbert; Patko, Daniel; Hos, Csaba; Kurunczi, Sándor; Szabó, Bálint; Ramsden, Jeremy J; Horvath, Robert
2014-09-01
This paper gives an overview of the advantages and associated caveats of the most common sample handling methods in surface-sensitive chemical and biological sensing. We summarize the basic theoretical and practical considerations one faces when designing and assembling the fluidic part of the sensor devices. The influence of analyte size, the use of closed and flow-through cuvettes, the importance of flow rate, tubing length and diameter, bubble traps, pressure-driven pumping, cuvette dead volumes, and sample injection systems are all discussed. Typical application areas of particular arrangements are also highlighted, such as the monitoring of cellular adhesion, biomolecule adsorption-desorption and ligand-receptor affinity binding. Our work is a practical review in the sense that for every sample handling arrangement considered we present our own experimental data and critically review our experience with the given arrangement. In the experimental part we focus on sample handling in optical waveguide lightmode spectroscopy (OWLS) measurements, but the present study is equally applicable for other biosensing technologies in which an analyte in solution is captured at a surface and its presence is monitored. Explicit attention is given to features that are expected to play an increasingly decisive role in determining the reliability of (bio)chemical sensing measurements, such as analyte transport to the sensor surface; the distorting influence of dead volumes in the fluidic system; and the appropriate sample handling of cell suspensions (e.g. their quasi-simultaneous deposition). At the appropriate places, biological aspects closely related to fluidics (e.g. cellular mechanotransduction, competitive adsorption, blood flow in veins) are also discussed, particularly with regard to their models used in biosensing. Copyright © 2014 Elsevier B.V. All rights reserved.
ANALYTICAL RESULTS OF MOX COLEMANITE CONCRETE SAMPLES POURED AUGUST 29, 2012
Energy Technology Data Exchange (ETDEWEB)
Best, D.; Cozzi, A.; Reigel, M.
2012-12-20
The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite-bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and with shielding operators from radiation. The Savannah River National Laboratory is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. Samples poured 8/29/12 were received on 9/20/2012 and analyzed. The average total density of each of the samples, measured by ASTM method C 642, met the lower bound of 1.88 g/cm³. The average partial hydrogen density of samples 8.6.1, 8.7.1, and 8.5.3, measured using method ASTM E 1311, met the lower bound of 6.04E-02 g/cm³. The average measured partial boron density of each sample met the lower bound of 1.65E-01 g/cm³, measured by the ASTM C 1301 method. The average partial hydrogen density of samples 8.5.1, 8.6.3, and 8.7.3 did not meet the lower bound. The samples, as received, were not wrapped in a moist towel as previous samples were, and appeared to be somewhat drier. This may explain the lower partial hydrogen density with respect to previous samples.
40 CFR 90.414 - Raw gaseous exhaust sampling and analytical system description.
2010-07-01
... probe may not be greater than 0.10 cm. The fitting that attaches the probe to the exhaust pipe must be... the different analyzers. (2) Heat the sample transport system from the engine exhaust pipe to the HC... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw gaseous exhaust sampling and...
40 CFR 89.412 - Raw gaseous exhaust sampling and analytical system description.
2010-07-01
... the exhaust pipe shall be as small as practical in order to minimize heat loss from the probe. (2) The... sample transport system from the engine exhaust pipe to the HC analyzer and the NOX analyzer must be... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw gaseous exhaust sampling and...
40 CFR 91.414 - Raw gaseous exhaust sampling and analytical system description.
2010-07-01
... shall not be greater than 0.10 cm. The fitting that attaches the probe to the exhaust pipe shall be as... internally to the different analyzers. (2) Heat the sample transport system from the engine exhaust pipe to... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw gaseous exhaust sampling and...
40 CFR 92.114 - Exhaust gas and particulate sampling and analytical system.
2010-07-01
... transport sample to analyzers. (I) Temperature sensor. A temperature sensor (T1) to measure the NO2 to NO... feet (1.22 m) from the exhaust duct. (iii) The sample transport system from the engine exhaust duct to.... (A) For diesel fueled and biodiesel fueled locomotives and engines, the wall temperature of the HC...
Stability of heparin blood samples during transport based on defined pre-analytical quality goals
DEFF Research Database (Denmark)
Jensen, Esther A; Stahl, Marta; Brandslund, Ivan
2008-01-01
BACKGROUND: In many countries, and especially in Scandinavia, blood samples drawn in primary healthcare are sent to a hospital laboratory for analysis. The samples are exposed to various conditions regarding storage time, storage temperature and transport form. As these factors can have a severe ... impact on the quality of results, we wanted to study which combination of transport conditions could fulfil our pre-defined goals for maximum allowable error. METHODS: Samples from 406 patients from nine general practitioners (GPs) in two Danish counties were sent to two hospitals for analyses, during ..., centrifuged and separated at the doctor's office within 45-60 min. This sample was considered the best estimate of a comparison value. RESULTS: The pre-set quality goals were fulfilled for all the investigated components for samples transported to hospital by courier either as whole blood or as "on gel ...
An analytical protocol for the determination of total mercury concentrations in solid peat samples
DEFF Research Database (Denmark)
Roos-Barraclough, F; Givelet, N; Martinez-Cortizas, A
2002-01-01
Traditional peat sample preparation methods such as drying at high temperatures and milling may be unsuitable for Hg concentration determination in peats due to the possible presence of volatile Hg species, which could be lost during drying. Here, the effects of sample preparation and natural ... .12 and 8.52 ng kg(-1) h(-1), respectively). Fertilising the peat slightly increased Hg loss (3.08 ng kg(-1) h(-1) in NPK-fertilised peat compared to 0.28 ng kg(-1) h(-1) in unfertilised peat, when averaged over all temperatures used). Homogenising samples by grinding in a machine also caused a loss of Hg. ... A comparison of two Hg profiles from an Arctic peat core, measured in frozen samples and in air-dried samples, revealed that no Hg losses occurred upon air-drying. A comparison of Hg concentrations in several plant species that make up peat showed that some species (Pinus mugo, Sphagnum recurvum ...
An analytical solution for the magneto-electro-elastic bimorph beam forced vibrations problem
International Nuclear Information System (INIS)
Milazzo, A; Orlando, C; Alaimo, A
2009-01-01
Based on the Timoshenko beam theory and on the assumption that the electric and magnetic fields can be treated as steady, since elastic waves propagate very slowly with respect to electromagnetic ones, a general analytical solution for the transient analysis of a magneto-electro-elastic bimorph beam is obtained. General magneto-electric boundary conditions can be applied on the top and bottom surfaces of the beam, allowing us to study the response of the bilayer structure to electromagnetic stimuli. The model reveals that the magneto-electric loads enter the solution as an equivalent external bending moment per unit length and as time-dependent mechanical boundary conditions through the definition of the bending moment. Moreover, the influences of the electro-mechanic, magneto-mechanic and electromagnetic coupling on the stiffness of the bimorph stem from the computation of the beam equivalent stiffness constants. Free and forced vibration analyses of both multiphase and laminated magneto-electro-elastic composite beams are carried out to check the effectiveness and reliability of the proposed analytic solution
International Nuclear Information System (INIS)
He Tao; Su Bingjing
2011-01-01
Highlights: → The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. → In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with or even under-performs correlated sampling for the other two types of problems. → In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used. - Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate small response change between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of precision of predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction but performs comparably with or even underperforms correlated sampling for the other two types of problems that involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy of the MCNP differential operator perturbation can be improved if the midpoint correction technique is used.
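The variance advantage of correlated sampling that this comparison measures can be illustrated generically. The sketch below estimates a small response change between an original and a perturbed system by reusing the same random histories for both; the toy transmission integral, the 5% cross-section perturbation, and all names are illustrative and not taken from the MCNP study.

```python
import math
import random

def response(x, sigma):
    # toy transmission response exp(-sigma * x) for a path length x in [0, 1]
    return math.exp(-sigma * x)

def estimate_change(n, sigma0=1.0, sigma1=1.05, seed=7):
    """Estimate the small response change between an original system (sigma0)
    and a perturbed one (sigma1) two ways: correlated sampling reuses the
    SAME histories for both systems; the independent estimate draws fresh
    histories for each, so the two errors no longer cancel."""
    rng = random.Random(seed)
    corr, indep = [], []
    for _ in range(n):
        x = rng.random()
        corr.append(response(x, sigma1) - response(x, sigma0))
        indep.append(response(rng.random(), sigma1)
                     - response(rng.random(), sigma0))

    def mean_and_se(v):
        m = sum(v) / len(v)
        var = sum((a - m) ** 2 for a in v) / (len(v) - 1)
        return m, math.sqrt(var / len(v))  # sample mean and standard error

    return mean_and_se(corr), mean_and_se(indep)
```

Because the two responses computed from a shared history are strongly correlated, their difference has a far smaller variance than the difference of two independently sampled responses, which is the precision effect the comparison above quantifies.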
International Nuclear Information System (INIS)
Estevez Alvarez, J.R.; Aguiar Lambert, D.; Montero Alvarez, A.; Pupo Gonzalez, I.; Padilla Alvarez, R.; Gonzalez Garcia, H.; Ramirez Sasco, M.
1998-01-01
In the present work the contents of Al, K, Ca, Mn, Fe, Ni, Cu, Zn, Sr, Cd and Pb in red mangroves (Rhizophora mangle) from different Cuban regions are determined using Energy Dispersive X-Ray Fluorescence (Emission-Transmission (Et) and I/C methods), Atomic Absorption Spectrophotometry (AAS), and Polarography (Anodic Stripping Voltammetry method). Biological Certified Reference Materials (CRM) are employed for the tracing of the curves of the relative I/C method and for the evaluation of the accuracy of the analytical results. The reliability of the results is also checked by statistical means. Standard deviations and the detection limits of each method are reported. Finally, the obtained values for the concentration of the different elements in each studied ecosystem are presented; a detailed discussion of their significance will be given in a further paper
Chance constrained problems: penalty reformulation and performance of sample approximation technique
Czech Academy of Sciences Publication Activity Database
Branda, Martin
2012-01-01
Vol. 48, No. 1 (2012), pp. 105-122. ISSN 0023-5954. R&D Projects: GA ČR (CZ) GBP402/12/G097. Institutional research plan: CEZ:AV0Z10750506. Keywords: chance constrained problems; penalty functions; asymptotic equivalence; sample approximation technique; investment problem. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.619, year: 2012. http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf
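The sample approximation technique evaluated in this record can be sketched generically: the chance constraint P(loss(x, ξ) > limit) ≤ ε is replaced by its empirical counterpart over N drawn scenarios. The two-asset toy portfolio, the normal return distributions, and all names below are illustrative assumptions, not taken from the paper.

```python
import random

def loss(weights, returns):
    # negative portfolio return: a loss above `limit` violates the constraint
    return -sum(w * r for w, r in zip(weights, returns))

def empirical_violation_rate(weights, scenarios, limit):
    # fraction of sampled scenarios in which the constraint is violated
    bad = sum(1 for sc in scenarios if loss(weights, sc) > limit)
    return bad / len(scenarios)

def sample_approx_feasible(weights, scenarios, limit, eps):
    """Sample approximation of the chance constraint
    P(loss(x, xi) > limit) <= eps: declare x feasible iff the empirical
    violation rate over the drawn scenarios does not exceed eps."""
    return empirical_violation_rate(weights, scenarios, limit) <= eps

rng = random.Random(0)
# 5000 return scenarios for two assets (illustrative normal returns)
scenarios = [[rng.gauss(0.05, 0.10), rng.gauss(0.03, 0.05)]
             for _ in range(5000)]
weights = [0.5, 0.5]
```

Under these illustrative distributions an equal-weight portfolio easily satisfies a loose loss limit of 0.2 at ε = 0.05, but fails the constraint "no negative return" at the same ε, since a sizeable fraction of scenarios lose money; the feasible set shrinks as ε or the scenario count changes, which is the behaviour such studies quantify.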
Application of bar codes to the automation of analytical sample data collection
International Nuclear Information System (INIS)
Jurgensen, H.A.
1986-01-01
The Health Protection Department at the Savannah River Plant collects 500 urine samples per day for tritium analyses. Prior to automation, all sample information was compiled manually. Bar code technology was chosen for automating this program because it provides a more accurate, efficient, and inexpensive method for data entry. The system has three major functions: sample labeling, accomplished at remote bar code label stations composed of an Intermec 8220 (Intermec Corp.) interfaced to an IBM-PC; data collection, done on a central VAX 11/730 (Digital Equipment Corp.), where bar code readers are used to log in samples to be analyzed on liquid scintillation counters and the VAX 11/730 processes the data and generates reports; and data storage, on the VAX 11/730 with backup on the plant's central computer. A brief description of several other bar code applications at the Savannah River Plant is also presented
40 CFR 89.421 - Exhaust gas analytical system; CVS bag sample.
2010-07-01
... the measurement of carbon monoxide and carbon dioxide, and a chemiluminescence detector (CLD) (or HCLD... following requirements: (1) The CLD (or HCLD) requires that the nitrogen dioxide present in the sample be...
40 CFR 91.423 - Exhaust gas analytical system; CVS grab sample.
2010-07-01
... carbon dioxide, and a chemiluminescence detector (CLD) (or heated CLD (HCLD)) for the measurement of...) The CLD (or HCLD) requires that the nitrogen dioxide present in the sample be converted to nitric...
40 CFR 90.423 - Exhaust gas analytical system; CVS grab sample.
2010-07-01
... measurement of carbon monoxide and carbon dioxide, and a chemiluminescence detector (CLD) (or heated CLD (HCLD... following requirements: (1) The CLD (or HCLD) requires that the nitrogen dioxide present in the sample be...
On the problems of PPS sampling in multi-character surveys ...
African Journals Online (AJOL)
This paper, which is on the problems of PPS sampling in multi-character surveys, compares the efficiency of some estimators used in PPSWR sampling for multiple characteristics. From a superpopulation model, we computed the expected variances of the different estimators for each of the first two finite populations ...
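The PPSWR estimators being compared build on the classical Hansen-Hurwitz form, which can be sketched as follows; the four-unit population, its size measures, and all names are illustrative assumptions, not taken from the paper.

```python
import random

def hansen_hurwitz_total(ys, ps, n):
    # unbiased estimator of the population total under probability-
    # proportional-to-size sampling with replacement (PPSWR):
    # t_hat = (1/n) * sum over draws of y_i / p_i
    return sum(y / p for y, p in zip(ys, ps)) / n

def draw_ppswr(y, p, n, rng):
    # n independent draws with selection probabilities proportional to size
    idx = rng.choices(range(len(y)), weights=p, k=n)
    return [y[i] for i in idx], [p[i] for i in idx]

rng = random.Random(1)
p = [0.1, 0.2, 0.3, 0.4]      # selection probabilities (size measures)
y = [10.0, 25.0, 28.0, 37.0]  # study variable; the true total is 100
```

When a characteristic is exactly proportional to the size measure (y_i = 100 p_i here), every draw contributes y_i/p_i = 100 and the estimator has zero variance; the further a characteristic departs from proportionality, the larger the variance. This is the crux of the multi-character problem: one set of selection probabilities cannot be efficient for every characteristic at once.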
International Nuclear Information System (INIS)
Schauenburg, H.; Weigert, P.
1992-01-01
Using solid sampling with graphite furnace atomic absorption spectrometry (GFAAS), values for cadmium, copper, lead and zinc in six biological reference materials were obtained from up to four laboratories participating in three collaborative studies. These results are compared with those obtained with other methods used in routine analysis by laboratories of official food control. Under certain conditions, solid sampling with GFAAS appears to be as suitable for routine analysis as conventional methods. (orig.)
An analytical method for the determination of plutonium in autopsy samples
International Nuclear Information System (INIS)
Santori, G.
1983-01-01
A sensitive method for the determination of plutonium in autopsy samples is described. After a suitable chemical pretreatment of the samples, the plutonium is separated by extraction chromatography with tri-n-octylphosphine oxide (TOPO) supported on microporous polyethylene. After electrodeposition of the plutonium, the activity is counted by alpha spectroscopy. The overall yield was 75-80%. The reagent blank activity was low enough to allow the determination of a few femtocuries of plutonium
Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies
DEFF Research Database (Denmark)
Levy, I.; Povinec, P.P.; Aoyama, M.
2011-01-01
The Japan Agency for Marine-Earth Science and Technology conducted in 2003-2004 the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere Oceans, which was a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling ... showed a reasonable agreement between the participating laboratories. The obtained data on the distribution of 137Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere Oceans.
Analytic solution of boundary-value problems for nonstationary model kinetic equations
International Nuclear Information System (INIS)
Latyshev, A.V.; Yushkanov, A.A.
1993-01-01
A theory for constructing the solutions of boundary-value problems for non-stationary model kinetic equations is constructed; this theory had previously been presented incorrectly. For the equation, separation of the variables is used, leading to a characteristic equation. Eigenfunctions are found in the space of generalized functions, and the eigenvalue spectrum is investigated. An existence and uniqueness theorem for the expansion of the Laplace transform of the solution with respect to the eigenfunctions is proved. The proof is constructive and gives explicit expressions for the expansion coefficients. An application to the Rayleigh problem is given, and the corresponding result of Cercignani is corrected
Directory of Open Access Journals (Sweden)
Kravets Victor V.
2016-05-01
Full text: The one-dimensional dynamic design of a component characterized by an inertia coefficient, an elastic coefficient, and a coefficient of energy dispersion is considered. The component is affected by external action in the form of time-independent and time-varying initial kinematic disturbances. A mathematical model of the component dynamics, as well as a new form of analytical representation of the transient in terms of the one-dimensional problem of kinematic effect, is provided. The dynamic design of the component is carried out according to the theory of modal control.
Analytic properties of the partial amplitudes in the three-body problem
International Nuclear Information System (INIS)
Blokhintsev, L.D.; Simonov, Yu.A.
1978-01-01
The singularities of the partial waves for the multiple scattering series of a system of three nonrelativistic particles with arbitrary masses are studied. Both on-shell and off-shell diagrams are considered for binary processes (2→2), break-up processes (2→3) and three-particle scattering (3→3). For any diagram the following are obtained: (1) simple analytic expressions for the loci of the non-potential singularities of the partial amplitudes, i.e. the singularities independent of the form of the interaction between the particles; (2) necessary and sufficient conditions for the existence of these singularities on the physical sheet; (3) the character of the singularities. A criterion is found for the kinematic possibility of a process of N successive pair collisions in a classical system of three point masses with contact interaction; the maximal number N(max) of these collisions is deduced
Solved problems in classical mechanics analytical and numerical solutions with comments
de Lange, O L
2010-01-01
Apart from an introductory chapter giving a brief summary of Newtonian and Lagrangian mechanics, this book consists entirely of questions and solutions on topics in classical mechanics that will be encountered in undergraduate and graduate courses. These include one-, two-, and three-dimensional motion; linear and nonlinear oscillations; energy, potentials, momentum, and angular momentum; spherically symmetric potentials; multi-particle systems; rigid bodies; translation and rotation of the reference frame; and the relativity principle and some of its consequences. The solutions are followed by a set of comments intended to stimulate inductive reasoning and provide additional information of interest. Both analytical and numerical (computer) techniques are used to obtain and analyze solutions. The computer calculations use Mathematica (version 7), and the relevant code is given in the text. It includes use of the interactive Manipulate function, which enables one to observe simulated motion on a computer screen, and...
International Nuclear Information System (INIS)
Chen, C.Y.; Gao, Y.X.; Li, B.; Yu, H.W.; Li, Y.F.; Sun, J.; Chai, Z.F.
2005-01-01
In the past, most analytical problems relating to biological systems were addressed by measuring the total concentrations of elements. Now there is increasing interest in the chemical forms in which an element is present in biological systems, e.g., the oxidation state, the binding state with macromolecules, or even the molecular structure. The biological effects of chromium, which is classified as an essential nutrient, are dependent upon its oxidation state. In general, trivalent chromium is biochemically active, whereas hexavalent chromium is considered to be toxic. Mercury is one of the most serious persistent environmental pollutants, and organic forms of mercury are known to possess much higher toxicity than inorganic mercury. Therefore, information on speciation is critically required for a better understanding of bioavailability, metabolism, transformation, and toxicity in vivo. Recently, chemical speciation of selenium, mercury, copper, zinc, iron, and so on has been investigated by INAA, ICP-MS, XRF, EXAFS and related techniques combined with chemical and biochemical separation (extraction, chromatography, gel electrophoresis, etc.). INAA, XRF, and ICP-MS have superior advantages for multielemental analysis with high accuracy and sensitivity, which makes it possible to analyze various elements of interest simultaneously. These offline or online techniques have been flexibly applied to different biological matrices, such as human hair, serum, urine, and various tissues and organs, in our research. In addition, EXAFS provides structural information about the moiety of metal centers up to a distance of approximately 4-5 Å. For instance, hepatocellular carcinoma (HCC) is one of the most common cancers worldwide. Imbalance of elements such as Se, Zn, Fe, Cu, Cd, Ca, etc. has been found in the whole blood or serum of patients with HCC. We found that the profiles of Se, Cd, Fe, Zn and Cu-containing proteins
ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS
Directory of Open Access Journals (Sweden)
Petr Váňa
2015-12-01
Full text: In this paper we address the problem of path planning to visit a set of regions with a Dubins vehicle, also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of the existing sampling-based approach that uses an increasing number of samples per goal region and thus improves the solution quality when more computational time is available. The proposed modification of the sampling-based algorithm has been compared with existing approaches for the DTSPN, and results on the quality of the found solutions and the required computational time are presented in the paper.
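The core of such a sampling-based approach can be sketched for a fixed region visiting order: draw candidate waypoints in each disk neighborhood and pick one per region by forward dynamic programming. In the sketch below, Euclidean leg lengths stand in for Dubins path lengths, and the geometry and names are illustrative, not taken from the paper.

```python
import math
import random

def sample_region(center, radius, m, rng):
    # m candidate waypoints drawn uniformly inside a disk neighborhood
    pts = []
    for _ in range(m):
        a = rng.uniform(0.0, 2.0 * math.pi)
        r = radius * math.sqrt(rng.random())
        pts.append((center[0] + r * math.cos(a), center[1] + r * math.sin(a)))
    return pts

def dist(p, q):
    # Euclidean stand-in for the Dubins path length between configurations
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length_dp(layers, start):
    """For a fixed region order, choose one sampled waypoint per region by
    forward dynamic programming so that the closed tour from `start` is the
    shortest achievable over the sampled candidates."""
    cost = [dist(start, p) for p in layers[0]]
    for k in range(1, len(layers)):
        cost = [min(cost[i] + dist(layers[k - 1][i], q)
                    for i in range(len(layers[k - 1])))
                for q in layers[k]]
    return min(c + dist(p, start) for c, p in zip(cost, layers[-1]))
```

Because the dynamic program minimizes over the candidate sets, enlarging every set can never lengthen the returned tour, which is the monotone quality-versus-time trade-off that increasing the number of samples per goal region exploits.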
Provenance validation of polished rice samples using nuclear and isotopic analytical techniques
International Nuclear Information System (INIS)
Pabroa, P.C.B.; Sucgang, R.J.; Mendoza, N.D.S.; Ebihara, M.; Peña, M.
2015-01-01
Rice (Oryza sativa) has been considered the best staple food among all cereals and is the staple food for over 3 billion people, constituting over half of the world's population. Elemental and isotopic analysis revealed variance between Philippine and Japanese rice. Rice samples collected in Japan and in the Philippines (market survey samples from Metro Manila, and farm harvests from Aklan province and Central Luzon) were washed, dried and ground to fine powder. Elemental analyses of the samples were carried out using instrumental neutron activation analysis (INAA), while isotopic signatures of the samples were determined using isotope ratio mass spectrometry (IRMS). Results show that, compared with the unpolished rice standard NIES CRM10b, the polished Japanese and Philippine rice samples show concentrations of elements reduced by as much as 1/10, 1/4, 1/5 and 1/3 for Mg, Mn, K and Na, respectively. Levels of Ca and Zn are not greatly affected. Arsenic, probably introduced from fertilizers used in rice fields, is found in all the Japanese rice tested at an average concentration of 0.103 μg/g and in three out of four of the Philippine rice samples at an average concentration of 0.70 μg/g. Higher levels of Br seen in two of the Philippine rice samples, at 14 and 34 μg/g, indicate a probable contamination source from the pesticide methyl bromide used during quarantine. Good correlation of isotopic signatures with geographical location for polished, but not for unpolished, rice samples from Central Luzon and Aklan indicated that provenance studies are best done on polished rice samples. The δ13C isotopic signatures are those of a C3 plant, with a possibly narrow distinguishable range: Japanese rice falls within -27.5 to -28.5 while Philippine rice falls within -29 to -30. Rice provenance can be ascertained using elemental analysis and isotopic abundance determination, as shown by this study. (author)
Using Analytics to Transform a Problem-Based Case Library: An Educational Design Research Approach
Schmidt, Matthew; Tawfik, Andrew A.
2018-01-01
This article describes the iterative design, development, and evaluation of a case-based learning environment focusing on an ill-structured sales management problem. We discuss our processes and situate them within the broader framework of educational design research. The learning environment evolved over the course of three design phases. A…
Durand, V. Mark; Merges, Eileen
2001-01-01
This article describes functional communication training (FCT) with students who have autism. FCT involves teaching alternative communication strategies to replace problem behaviors. The article reviews the conditions under which this intervention is successful and compares the method with other behavioral approaches. It concludes that functional…
Educational reform as a dynamic system of problems and solutions: Towards an analytic instrument
Luttenberg, J.; Carpay, T.; Veugelers, W.
2013-01-01
Large-scale educational reforms are difficult to realize and often fail. In the literature, the course of reform and problems associated with this are frequently discussed. The explanations and recommendations then provided are so diverse that it is difficult to gain a comprehensive overview of what
Martini, Valeria; Bernardi, Serena; Marelli, Priscilla; Cozzi, Marzia; Comazzi, Stefano
2018-06-01
Objectives Flow cytometry (FC) is becoming increasingly popular among veterinary oncologists for the diagnosis of lymphoma or leukaemia. It is accurate, fast and minimally invasive. Several studies of FC have been carried out in canine oncology and applied with great results, whereas there is limited knowledge and use of this technique in feline patients. This is mainly owing to the high prevalence of intra-abdominal lymphomas in this species and the difficulty associated with the diagnostic procedures needed to collect the sample. The purpose of the present study is to investigate whether any pre-analytical factor might affect the quality of suspected feline lymphoma samples for FC analysis. Methods Ninety-seven consecutive samples of suspected feline lymphoma were retrospectively selected from the authors' institution's FC database. The referring veterinarians were contacted and interviewed about several different variables, including signalment, appearance of the lesion, features of the sampling procedure and the experience of veterinarians performing the sampling. Statistical analyses were performed to assess the possible influence of these variables on the cellularity of the samples and the likelihood of it being finally processed for FC. Results Sample cellularity is a major factor in the likelihood of the sample being processed. Moreover, sample cellularity was significantly influenced by the needle size, with 21 G needles providing the highest cellularity. Notably, the sample cellularity and the likelihood of being processed did not vary between peripheral and intra-abdominal lesions. Approximately half of the cats required pharmacological restraint. Side effects were reported in one case only (transient swelling after peripheral lymph node sampling). Conclusions and relevance FC can be safely applied to cases of suspected feline lymphomas, including intra-abdominal lesions. A 21 G needle should be preferred for sampling. This study provides the basis for
International Nuclear Information System (INIS)
Al-Sarraj, Ziyad Shihab; Damboos, Hassan I; Roumie, Mohamad
2012-01-01
The present work aimed at investigating the compositions and microstructures of some archaeological samples dating back to various periods of the ancient Iraqi civilizations, using PIXE, XRF, XRD, and SEM techniques. The models selected for the study (ceramics, glaze, etc.) were diverse in size and nature; therefore, a limited number of samples were cut from them with a small diamond wheel. A conventional powder metallurgy method was then used to prepare the samples. The dried samples were coated with a thin layer of carbon and analyzed using the ion beam accelerator of the LAEC. Three other groups of samples were also prepared for analysis by X-ray fluorescence (XRF), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The analysis results for the chemical composition showed good agreement between the various techniques, as they did for the phases, while the fine-structure analysis obtained by optical and scanning microscopy exhibited features of a structure that underwent intensified densification in the final stage of sintering, accompanied by a quasi-homogeneous distribution of closed pores. This leads to the conclusion that the sintering temperature used by the ancient Iraqis was sufficient and may fall in the range 950-1200°C, and also that the mixes and forming methods they used were both suitable for obtaining well-sintered bodies with an even distribution of pores. A ring-shaped trace noticed in the SEM micrographs needs further work and study to be explained.
Analytical Methods for Cs-137 and Other Radionuclides in Solvent Samples
International Nuclear Information System (INIS)
Pennebaker, F.M.
2002-01-01
Accurate characterization of individual waste components is critical to ensure the design and operation of effective treatment processes and compliance with waste acceptance criteria. Current elemental analysis of organic matrices consists of converting the organic sample to an aqueous one by digestion, which is inadequate in many cases. Direct analysis of the organic phase would increase sensitivity and decrease contamination and analysis time. For this project, we evaluated an Aridus membrane-desolvation sample introduction system for the direct analysis of organic solvents by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). The desolvator-ICP-MS successfully analyzed solvent from the caustic-side solvent extraction (CSSX) process and tri-butyl phosphate (TBP) organic tank waste from F-canyon for a variety of elements. Detection limits for most elements were determined to be in the part-per-trillion (ppt) range. This technology should increase accuracy in support of SRTC activities involving CSSX and other site processes involving organic compounds
International Nuclear Information System (INIS)
Allagi, Mabruk O.; Lewins, Jeffery D.
1999-01-01
In a further study of variationally processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in phase space depends on the presence of neutrons from the real source at that point, but in deep penetration problems few neutrons reach regions far from the source. To overcome this problem, two approaches are considered: (1) virtual sampling is used as far as the real neutrons reach, and fictitious sampling, distributed over all the remaining regions, is introduced beyond that; or (2) a single fictitious source is placed where the real neutrons almost terminate, and virtual sampling is then used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, performing best when one fictitious source is used in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate importances in regions far from the source, the proportion of fictitious to real sources has to be optimized, weighing accuracy against computational cost. It was found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent; as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.
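The deep-penetration statistics problem described above can be illustrated with a deliberately naive analog sketch (a hypothetical one-dimensional slab with forward streaming only and illustrative parameters; this is not the authors' variational scheme):

```python
import math
import random

random.seed(1)

def tally_collisions(n_hist, n_cells, cell_width=1.0, sigma_t=1.0, p_absorb=0.5):
    """Analog 1D slab Monte Carlo: score collisions per cell for particles
    born at x = 0 and streaming in +x (no angular scattering)."""
    scores = [0] * n_cells
    for _ in range(n_hist):
        x = 0.0
        while True:
            # sample a free-flight length from the exponential distribution
            x += -math.log(1.0 - random.random()) / sigma_t
            cell = int(x // cell_width)
            if cell >= n_cells:
                break                      # leaked out the far side
            scores[cell] += 1
            if random.random() < p_absorb:
                break                      # absorbed at the collision
    return scores

scores = tally_collisions(10000, 10)
```

The score count falls roughly geometrically with depth, so the far cells are exactly where a fictitious source (option 2 above) would be placed to restore good statistics.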
The analytical solution of the problem of a shock focusing in a gas for one-dimensional case
Shestakovskaya, E. S.; Magazov, F. G.
2018-03-01
The analytical solution of the problem of an imploding shock wave in a vessel with an impermeable wall is constructed for the cases of planar, cylindrical and spherical symmetry. A negative velocity is prescribed at the vessel boundary; the initial velocity of the cold ideal gas is zero. At the initial time the shock starts from the boundary and propagates towards the centre of symmetry. The boundary moves under a particular law that conforms to the motion of the shock: in Euler variables it moves, but in Lagrangian variables its trajectory is a vertical line. Equations determining the structure of the gas flow between the shock front and the boundary as functions of time and the Lagrangian coordinate, as well as the dependence of the entropy on the shock wave velocity, are obtained. Self-similar coefficients and the corresponding critical values of the self-similar coordinates were found for a wide range of the adiabatic index. The problem is solved in Lagrangian coordinates.
Fayolle, Guy; Malyshev, Vadim
2017-01-01
This monograph aims to promote original mathematical methods to determine the invariant measure of two-dimensional random walks in domains with boundaries. Such processes arise in numerous applications and are of interest in several areas of mathematical research, such as Stochastic Networks, Analytic Combinatorics, and Quantum Physics. This second edition consists of two parts. Part I is a revised upgrade of the first edition (1999), with additional recent results on the group of a random walk. The theoretical approach given therein has been developed by the authors since the early 1970s. By using Complex Function Theory, Boundary Value Problems, Riemann Surfaces, and Galois Theory, completely new methods are proposed for solving functional equations of two complex variables, which can also be applied to characterize the Transient Behavior of the walks, as well as to find explicit solutions to the one-dimensional Quantum Three-Body Problem, or to tackle a new class of Integrable Systems. Part II borrows spec...
Directory of Open Access Journals (Sweden)
Serena Bernardi
2017-05-01
Full Text Available Introduction Flow cytometry (FC) is an increasingly requested technique on which veterinary oncologists rely for an accurate, fast, minimally invasive diagnosis of lymphoma or leukemia. FC has been studied and applied with great results in canine oncology, whereas in feline oncology its use is still limited. This is mainly due to a presumed difficulty of sampling, given the high prevalence of intra-abdominal lymphomas. The purpose of the present study is to investigate whether any pre-analytical factor might affect the quality of suspected feline lymphoma samples for FC analysis. Methods 97 consecutive samples of suspected feline lymphoma were retrospectively selected from the authors' institution FC database. The referring veterinarians were recalled and interrogated about several variables, including signalment, features of the lesion, features of the sampling procedure, and the experience of the veterinarians performing the sampling. Statistical analyses were performed to assess the possible influence of these variables on the cellularity of the samples and on the likelihood of the samples being processed for FC. Results None of the investigated variables significantly influenced the quality of the submitted samples except needle size, with 21G needles providing the highest cellularity (Table 1). Notably, sample quality did not vary between peripheral and intra-abdominal lesions. Sample cellularity alone influenced the likelihood of being processed. About half of the cats required pharmacological restraint. Side effects were reported in one case only (transient swelling after peripheral lymph node sampling). Conclusions FC can be safely applied to cases of suspected feline lymphoma, even for intra-abdominal lesions. A 21G needle should be preferred for sampling. This study provides the basis for the wider adoption of this minimally invasive, fast and cost-effective technique in feline medicine.
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
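As an illustration of the evidence-accumulation loop the abstract describes, here is a minimal nested-sampling sketch in Python. The toy Gaussian likelihood, uniform prior, and naive rejection step are all assumptions of this note; the paper replaces the rejection step with an HMC move driven by SEM gradient estimates:

```python
import math
import random

random.seed(0)

SIGMA = 0.1

def log_likelihood(theta):
    # toy Gaussian likelihood centred at 0.5 with standard deviation SIGMA
    return -0.5 * ((theta - 0.5) / SIGMA) ** 2 - math.log(SIGMA * math.sqrt(2 * math.pi))

def nested_sampling(n_live=100, n_iter=600):
    """Skeleton of the NS evidence estimate over a uniform prior on [0, 1]."""
    live = [random.random() for _ in range(n_live)]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=log_likelihood)
        log_l_star = log_likelihood(worst)
        x = math.exp(-i / n_live)          # expected prior-volume shrinkage
        z += (x_prev - x) * math.exp(log_l_star)
        x_prev = x
        # constrained step: draw from the prior until L(cand) > L(worst)
        while True:
            cand = random.random()
            if log_likelihood(cand) > log_l_star:
                live[live.index(worst)] = cand
                break
    # add the contribution of the remaining live points
    z += x_prev * sum(math.exp(log_likelihood(t)) for t in live) / len(live)
    return z

z = nested_sampling()  # true evidence is ~1.0 for this toy problem
```

The rejection step becomes exponentially inefficient as the likelihood constraint tightens, which is precisely the motivation for using HMC inside the constrained step.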
40 CFR 86.1310-90 - Exhaust gas sampling and analytical system; diesel engines.
2010-07-01
... avoid moisture condensation. A filter pair loading of 1 mg is typically proportional to a 0.1 g/bhp-hr..., the temperatures where condensation of water in the exhaust gases could occur. This may be achieved by... sampling zone in the primary dilution tunnel and as required to prevent condensation at any point in the...
In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement used to describe fermentability of NDF by rumen microbes. Variability is inherent in assays and affects the precision that can be expected for replicated samples. The study objective was to evaluate variability w...
40 CFR 91.421 - Dilute gaseous exhaust sampling and analytical system description.
2010-07-01
... Pump—Constant Volume Sampler (PDP-CVS) system with a heat exchanger, or a Critical Flow Venturi... gas mixture temperature, measured at a point immediately ahead of the critical flow venturi, must be.... (a) General. The exhaust gas sampling system described in this section is designed to measure the...
Multiplicity and contiguity of ablation mechanisms in laser-assisted analytical micro-sampling
International Nuclear Information System (INIS)
Bleiner, Davide; Bogaerts, Annemie
2006-01-01
Laser ablation is implemented in several scientific and technological fields, and serves as a rapid sample-introduction technique in elemental and trace analysis. At high laser fluence, the ejection of micro-sized droplets enhances the surface recession speed, degrades depth resolution, and alters the sampling stoichiometry. Such large particles appear to originate from at least two different processes, phase explosion and melt splashing. Experimental evidence for both was found in metallic matrices, whereas non-metallic samples showed more complex phenomena such as cracking. The spatial distribution of the beam energy profile is responsible for significant differences in the ablation mechanism across the irradiated region and for heterogeneous sampling. Under a Gaussian irradiance distribution, the centre of the crater, where the irradiance is highest, experienced fast heating with rapid ejection of a mixture of particles and vapor (spinodal breakdown). The crater periphery was subjected to more modest irradiation, with melt mobilization and wall formation. The resulting overall particle size distribution consisted of an abundant nano-sized fraction, produced by vapor condensation, and a micro-sized fraction produced by melt expulsion.
Isotope analytics for the evaluation of the feeding influence on the isotope ratio in beef samples
International Nuclear Information System (INIS)
Herwig, Nadine
2010-01-01
Information about the origin of food and the associated production systems is highly significant for food control. An extremely promising approach to obtaining such information is the determination of the isotope ratios of different elements. In this study the correlation of the isotope ratios C-13/C-12, N-15/N-14, Mg-25/Mg-24, and Sr-87/Sr-86 in bovine samples (milk and urine) with the corresponding isotope ratios in feed was investigated. It was shown that in the bovine samples all four isotope ratios correlate with the isotope composition of the feed. The isotope ratios of strontium and magnesium have the advantage that they directly reflect the isotope ratios of the ingested feed, since there is no isotope fractionation in the bovine organism, in contrast to the carbon and nitrogen isotope ratios. From the present feeding study it is evident that a feed change leads to a significant change in the delta C-13 values in milk and urine within as little as 10 days. For the delta N-15 values the feed change became visible in the bovine urine only after 49 days. Investigations of cows from two different regions (Berlin/Germany and Goestling/Austria) kept under different feeding regimes revealed no differences in the N-15/N-14 and Mg-26/Mg-24 isotope ratios. The strongest correlation between the isotope ratios of the bovine samples and the kind of ingested feed was observed for the carbon isotope ratio. With this ratio even the smallest differences in feed composition were traceable in the bovine samples. Since different regions usually coincide with different feeding regimes, carbon isotope ratios can be used to distinguish bovine samples from different regions if the delta C-13 values of the ingested feed are different. Furthermore, the determination of strontium isotope ratios revealed significant differences between the bovine and feed samples of Berlin and Goestling due to the different geological conditions. Hence the carbon and strontium isotope ratios allow the best
Analytical solution of the problem of a shock wave in the collapsing gas in Lagrangian coordinates
Kuropatenko, V. F.; Shestakovskaya, E. S.
2016-10-01
An exact solution is proposed for the problem of a convergent shock wave and gas-dynamic compression in a spherical vessel with an impermeable wall, in Lagrangian coordinates. At the initial time the velocity of the cold ideal gas is zero, and a negative velocity is prescribed on the boundary of the sphere. For t > t0 the shock wave propagates from the boundary into the gas, and the boundary of the sphere moves under a certain law correlated with the motion of the shock wave. The trajectories of the gas particles in Lagrangian coordinates are straight lines. The equations determining the structure of the gas flow between the shock front and the gas boundary have been found as functions of time and the Lagrangian coordinate, together with the dependence of the entropy on the velocity of the shock wave. The problem is solved in Lagrangian coordinates for the first time; this formulation differs fundamentally from the previously known self-similar formulations of shock convergence to the centre of symmetry and reflection from the centre, which were constructed for an infinite domain in Euler coordinates.
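For context, the jump conditions on which such exact constructions rest are the strong-shock Rankine-Hugoniot relations for an ideal gas (standard textbook results, not equations taken from the paper): for cold gas at rest ahead of a shock of speed D,

```latex
% Strong-shock limit: p_0 = 0, u_0 = 0 ahead of the front,
% shock speed D, upstream density \rho_0, adiabatic index \gamma.
\rho_1 = \frac{\gamma+1}{\gamma-1}\,\rho_0, \qquad
u_1 = \frac{2}{\gamma+1}\,D, \qquad
p_1 = \frac{2}{\gamma+1}\,\rho_0 D^2 .
```

The entropy behind the front is then fixed by the post-shock state (p_1, rho_1), which is why the entropy can be expressed as a function of the shock velocity alone.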
Hamed, Haikel Ben; Bennacer, Rachid
2008-08-01
This work evaluates, algebraically and numerically, the influence of a perturbation on the spectrum of a diagonalizable matrix. Two approaches are possible: the first uses Lidskii's theorem on perturbations of a matrix depending on a parameter, based primarily on the Jordan structure of the unperturbed matrix; the second factorizes the matrix system and then computes numerically the roots of the characteristic polynomial of the perturbed matrix. This problem can serve as a standard model in the equations of continuum mechanics. Here the second approach was chosen and, to illustrate the application, the Rayleigh-Bénard problem in a Darcy medium, perturbed by a filtering through-flow, was selected. The matrix form of the problem is obtained from a linear stability analysis by a finite element method. It is shown that the general phenomenon can be decomposed into elementary ones, described respectively by a perturbed matrix and a perturbation. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).
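A short numerical sketch of the second approach (with illustrative 2x2 matrices, not the finite-element matrices of the paper) also shows the Lidskii-type behaviour: a Jordan block of size two splits under a perturbation of size eps into eigenvalues shifted by O(sqrt(eps)):

```python
import numpy as np

# A has a Jordan block at 2 (non-semisimple eigenvalue); E is an
# illustrative perturbation direction, standing in for the through-flow.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
E = np.array([[0.0, 0.0],
              [1.0, 0.0]])

for eps in (1e-4, 1e-2):
    vals = np.linalg.eigvals(A + eps * E)
    # the block splits as 2 +/- sqrt(eps): a fractional-power shift
    print(eps, sorted(vals.real))
```

The sqrt(eps) splitting (rather than an O(eps) shift) is exactly the kind of sensitivity that the Jordan structure of the unperturbed matrix controls in Lidskii's theory.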
Cantrell, John H., Jr.; Cantrell, Sean A.
2008-01-01
A comprehensive analytical model of the interaction of the cantilever tip of the atomic force microscope (AFM) with the sample surface is developed that accounts for the nonlinearity of the tip-surface interaction force. The interaction is modeled as a nonlinear spring coupled at opposite ends to linear springs representing cantilever and sample surface oscillators. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a standard iteration procedure. Solutions are obtained for the phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) techniques including force modulation microscopy, atomic force acoustic microscopy, ultrasonic force microscopy, heterodyne force microscopy, resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM), and the commonly used intermittent contact mode (TappingMode) generally available on AFMs. The solutions are used to obtain a quantitative measure of image contrast resulting from variations in the Young modulus of the sample for the amplitude and phase images generated by the A-AFM techniques. Application of the model to RDF-AFUM and intermittent soft contact phase images of LaRC-cp2 polyimide polymer is discussed. The model predicts variations in the Young modulus of the material of 24 percent from the RDF-AFUM image and 18 percent from the intermittent soft contact image. Both predictions are in good agreement with the literature value of 21 percent obtained from independent, macroscopic measurements of sheet polymer material.
Fukushima, Romualdo S; Hatfield, Ronald D
2004-06-16
Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.
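The standard-curve arithmetic behind the ABSL procedure can be sketched as follows (all numerical values are hypothetical, and the 280 nm wavelength is an assumption of this sketch, not data from the paper):

```python
import numpy as np

# Hypothetical standard curve: absorbance of acidic-dioxane-extracted
# lignin standards at known concentrations (illustrative values only).
conc = np.array([0.00, 0.05, 0.10, 0.20, 0.30])        # mg/mL
absorbance = np.array([0.00, 0.89, 1.78, 3.56, 5.34])  # at 280 nm (assumed)

# slope of the linear regression = absorptivity (extinction coefficient)
slope, intercept = np.polyfit(conc, absorbance, 1)

def lignin_concentration(a_sample, dilution=1.0):
    """Concentration of an unknown cell-wall sample from its absorbance."""
    return (a_sample - intercept) / slope * dilution

c_unknown = lignin_concentration(2.67)  # -> 0.15 mg/mL for these standards
```

The key point of the ABSL method is that the slope (absorptivity) is derived from lignin actually extracted from the plant material under study, rather than from a generic commercial standard.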
Yoon, Heojeong; Woo, Ae Ja; Treagust, David; Chandrasegaran, AL
2014-01-01
The efficacy of problem-based learning (PBL) in an analytical chemistry laboratory course was studied using a programme that was designed and implemented with 20 students in a treatment group over 10 weeks. Data from 26 students in a traditional analytical chemistry laboratory course were used for comparison. Differences in the creative thinking ability of students in both the treatment and control groups were evaluated before and at the end of the implementation of the programme using the Torrance Tests of Creative Thinking. In addition, changes in students' self-regulated learning skills, using the Self-Regulated Learning Interview Schedule (SRLIS), and their self-evaluation proficiency were evaluated. Analysis of covariance showed that the creative thinking ability of the treatment group had improved statistically significantly after the PBL course, indicating a positive effect of PBL on creative thinking ability. The SRLIS test showed that students in the treatment group used self-regulated learning strategies more frequently than students in the comparison group. According to the results of the self-evaluation, students became more positive and confident in problem-solving and group work as the semester progressed. Overall, PBL was shown to be an effective pedagogical instructional strategy for enhancing chemistry students' creative thinking ability, self-regulated learning skills and self-evaluation.
Matrix effects break the LC behavior rule for analytes in LC-MS/MS analysis of biological samples.
Fang, Nianbai; Yu, Shanggong; Ronis, Martin Jj; Badger, Thomas M
2015-04-01
High-performance liquid chromatography (HPLC) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) are generally accepted as the preferred techniques for detecting and quantitating analytes of interest in biological matrices, on the basis of the rule that one chemical compound yields one LC-peak with a reliable retention time (Rt). In the current study, however, we found that under the same LC-MS conditions, the Rt and shape of the LC-peaks of bile acids in urine samples from animals fed dissimilar diets differed significantly from each other. To verify this matrix effect, 17 authentic bile acid standards were dissolved in pure methanol or in methanol containing extracts of urine from pigs consuming either breast milk or infant formula, and analyzed by LC-MS/MS. The matrix components in urine from piglets fed formula significantly reduced the LC-peak Rt and areas of the bile acids. This is the first characterization of this matrix effect on Rt in the literature. Moreover, the matrix effect resulted in an unexpected LC behavior: a single compound yielded two LC-peaks, breaking the rule of one LC-peak per compound. The three bile acid standards that exhibited this unconventional LC behavior were chenodeoxycholic acid, deoxycholic acid, and glycocholic acid. One possible explanation is that some matrix components may have loosely bonded to the analytes, changing the time the analytes were retained on the chromatography column and interfering with the ionization of the analytes in the MS ion source, thereby altering the peak area. This study indicates that a comprehensive understanding of matrix effects is needed to improve the use of HPLC and LC-MS/MS techniques for qualitative and quantitative analyses of analytes in pharmacokinetics, proteomics/metabolomics, drug development, and sports drug testing, especially when LC-MS/MS data are analyzed by automation software in which identification of an analyte is based on its exact molecular weight and Rt.
International Nuclear Information System (INIS)
Click, D; Tommy Edwards, T; Henry Ajo, H
2008-01-01
For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs confirmation of the applicability of the digestion method to be used by the DWPF lab for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) receipt samples and SRAT product process control samples. DWPF SRAT samples are typically dissolved using a room temperature HF-HNO3 acid dissolution (i.e., DWPF Cold Chem Method, see Procedure SW4-15.201) and then analyzed by inductively coupled plasma - atomic emission spectroscopy (ICP-AES). This report contains the results and comparison of data generated from performing the Aqua Regia (AR), Sodium Peroxide/Hydroxide Fusion (PF) and DWPF Cold Chem (CC) method digestion of Sludge Batch 5 (SB5) SRAT Receipt and SB5 SRAT Product samples. The SB5 SRAT Receipt and SB5 SRAT Product samples were prepared in the SRNL Shielded Cells, and the SRAT Receipt material is representative of the sludge that constitutes the SB5 Batch composition. This is the sludge in Tank 51 that is to be transferred into Tank 40, which will contain the heel of Sludge Batch 4 (SB4), to form the SB5 Blend composition. The results for any one particular element should not be used in any way to identify the form or speciation of a particular element in the sludge or used to estimate ratios of compounds in the sludge. A statistical comparison of the data validates the use of the DWPF CC method for SB5 Batch composition. However, the difficulty that was encountered in using the CC method for SB4 brings into question the adequacy of CC for the SB5 Blend. Also, it should be noted that visible solids remained in the final diluted solutions of all samples digested by this method at SRNL (8 samples total), which is typical for the DWPF CC method but not seen in the other methods. Recommendations to the DWPF for application to SB5 based on studies to date: (1) A dissolution study should be performed on the WAPS
Comparison between sampling and analytical methods in characterization of pollutants in biogas.
Mariné, Sílvia; Pedrouzo, Marta; Marcé, Rosa Maria; Fonseca, Ignacio; Borrull, Francesc
2012-10-15
Different sampling methods involving the collection of biogas in Tedlar bags or on adsorption tubes, and different GC-MS injection systems, loop injection or cold trap injection (with bags or by tube desorption), were compared to establish the best method for determining the minority compounds in biogas from sewage treatment plants (STPs). Parameters such as the stability of compounds in Tedlar bags or cartridges and the adsorption of some less volatile compounds in the thermal desorption system (TD) were also studied. The optimized methods allowed most compounds to be determined at low mg m(-3) levels. Among them, maximum values of D5 (4.84 mg m(-3)), decane (95-118 mg m(-3)) and H(2)S (2223 mg m(-3)) were found in biogas samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Research and Establishment of the Analytical Procedure for Sr-90 in Milk Samples
International Nuclear Information System (INIS)
Tran Thi Tuyet Mai; Duong Duc Thang; Nguyen Thi Linh; Bui Thi Anh Duong
2014-01-01
Sr-90 is an indicator of the transfer of radionuclides from the environment to humans. This work was set up to establish a procedure for Sr-90 determination in the main popular foodstuffs, with a focus on fresh milk. The aims were to establish the procedure, assess the chemical yield, and test samples of Vietnamese fresh milk; QA and QC for the procedure were also carried out using IAEA standard samples. The procedure for the determination of Sr-90 in milk has been completed. The chemical recovery yields for Y-90 and Sr-90 were 46.76 ± 1.25% and 0.78 ± 0.086, respectively. The QA & QC program was carried out using reference material IAEA-373. The results agree well with the certified values. Three reference samples were analysed with 15 measurements; the resulting Sr-90 concentration after statistical processing was 3.69 Bq/kg with an uncertainty of 0.23 Bq/kg. The IAEA-154 certificate value for Sr-90 (half-life 28.8 years) is 6.9 Bq/kg, with a 95% confidence interval of 6.0-8.0 Bq/kg as of 31 August 1987. After decay correction, the activity at the time of measurement is 3.67 Bq/kg, so the result of this work matches the certified IAEA value very well. Five Vietnamese fresh milk samples were analysed for Sr-90; the specific activity of Sr-90 in the milk ranged from 0.032 to 0.041 Bq/l. (author)
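The decay adjustment quoted at the end of the abstract can be reproduced with one line of arithmetic (the elapsed time of 26.2 years is an assumption of this sketch, chosen to match the quoted dates):

```python
def decay_correct(a0, half_life_years, elapsed_years):
    """Radioactive decay: A = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-elapsed_years / half_life_years)

# IAEA-154 certificate: 6.9 Bq/kg Sr-90 (T1/2 = 28.8 y) on 1987-08-31.
# About 26.2 elapsed years reproduces the ~3.67 Bq/kg decay-corrected
# value quoted in the abstract.
a_now = decay_correct(6.9, 28.8, 26.2)
```

The measured 3.69 ± 0.23 Bq/kg therefore agrees with the decay-corrected certificate value within its stated uncertainty.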
Quality assurance in the pre-analytical phase of human urine samples by (1)H NMR spectroscopy.
Budde, Kathrin; Gök, Ömer-Necmi; Pietzner, Maik; Meisinger, Christine; Leitzmann, Michael; Nauck, Matthias; Köttgen, Anna; Friedrich, Nele
2016-01-01
Metabolomic approaches investigate changes in metabolite profiles, which may reflect changes in metabolic pathways and provide information correlated with a specific biological process or pathophysiology. High-resolution (1)H NMR spectroscopy is used to identify metabolites in biofluids and tissue samples qualitatively and quantitatively. This pre-analytical study evaluated the effects of storage time and temperature on (1)H NMR spectra of human urine in two settings: first, short-term effects, probably due to acute delays in sample handling, and second, the effect of prolonged storage of up to one month, to find markers of sample mishandling. A number of statistical procedures were used to assess the differences between samples stored under different conditions, including Projection to Latent Structures Discriminant Analysis (PLS-DA), non-parametric testing, and mixed-effect linear regression analysis. The results indicate that human urine samples can be stored at 10 °C for 24 h or at -80 °C for 1 month, as no relevant changes in (1)H NMR fingerprints were observed over these time periods and temperature conditions. However, some metabolites, most likely of microbial origin, showed alterations during prolonged storage, but without facilitating classification. In conclusion, the presented protocol for urine sample handling and semi-automatic metabolite quantification is suitable for large-scale epidemiological studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Salinic, Slavisa [University of Kragujevac, Faculty of Mechanical Engineering, Kraljevo (RS)
2010-10-15
In this paper, an analytical solution for the problem of finding profiles of gravity flow discharge chutes required to achieve maximum exit velocity under Coulomb friction is obtained by application of variational calculus. The model of a particle which moves down a rough curve in a uniform gravitational field is used to obtain a solution of the problem for various boundary conditions. The projection sign of the normal reaction force of the rough curve onto the normal to the curve and the restriction requiring that the tangential acceleration be non-negative are introduced as the additional constraints in the form of inequalities. These inequalities are transformed into equalities by introducing new state variables. Although this is fundamentally a constrained variational problem, by further introducing a new functional with an expanded set of unknown functions, it is transformed into an unconstrained problem where broken extremals appear. The obtained equations of the chute profiles contain a certain number of unknown constants which are determined from a corresponding system of nonlinear algebraic equations. The obtained results are compared with the known results from the literature. (orig.)
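A minimal sketch of the underlying dynamics helps fix ideas (these are the standard relations for a particle on a rough curve; the curvature sign convention is an assumption here, and the paper's full constrained variational formulation with broken extremals is richer):

```latex
% Particle of mass m sliding down a rough curve; s = arc length,
% \theta(s) = local slope angle, R(s) = radius of curvature,
% \mu = Coulomb friction coefficient.
\frac{d}{ds}\!\left(\frac{v^{2}}{2}\right) = g\sin\theta - \frac{\mu N}{m},
\qquad
N = m\!\left(g\cos\theta - \frac{v^{2}}{R}\right),
\qquad
N \ge 0,\quad \frac{dv}{ds} \ge 0,
\qquad
v(s_{\text{exit}}) \to \max .
```

The two inequality constraints are exactly the ones the paper converts into equalities by introducing new state variables, which turns the constrained problem into an unconstrained one with broken extremals.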
International Nuclear Information System (INIS)
Arsenault, Louis-François; Millis, Andrew J; Neuberg, Richard; Hannah, Lauren A
2017-01-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved. (paper)
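The database-plus-regression idea can be sketched end to end with a toy Laplace-like kernel. The kernel, the family of input spectra, the ridge regularization strength, and the noise level are all illustrative assumptions of this sketch; the paper's learned map is more sophisticated than plain ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Fredholm kernel G(tau) = \int K(tau, w) A(w) dw -- here an
# illustrative Laplace-like kernel, not the specific kernel of the paper.
w = np.linspace(0.1, 5.0, 40)
tau = np.linspace(0.0, 3.0, 30)
dw = w[1] - w[0]
K = np.exp(-np.outer(tau, w)) * dw

def random_spectrum():
    # smooth, non-negative, normalized input (a Gaussian bump)
    c, s = rng.uniform(0.5, 4.5), rng.uniform(0.2, 1.0)
    a = np.exp(-0.5 * ((w - c) / s) ** 2)
    return a / (a.sum() * dw)

# 1) database of physically meaningful inputs and forward-computed outputs
A_db = np.array([random_spectrum() for _ in range(2000)])
G_db = A_db @ K.T + 1e-4 * rng.standard_normal((2000, tau.size))

# 2) regularized linear regression from outputs G back to inputs A
ridge = 1e-3
W = np.linalg.solve(G_db.T @ G_db + ridge * np.eye(tau.size), G_db.T @ A_db)

# 3) predict for a previously unseen input, then project onto constraints
A_true = random_spectrum()
A_hat = np.clip((A_true @ K.T) @ W, 0.0, None)   # non-negativity projection
err = np.abs(A_hat - A_true).sum() * dw           # L1 error vs. unit mass
```

The regression is stable because it is fit only on the manifold of physically meaningful inputs, sidestepping the ill-conditioning of inverting K directly; the final clip is a crude stand-in for the paper's projection onto the constraint subspace.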
Marine sampling in Malaysia coastal area: the challenge, problems and solution
International Nuclear Information System (INIS)
Norfaizal Mohamed; Khairul Nizam Razali; Mohd Rafaie Mohd Murtadza; Muhammad Amin Abdul Ghani; Zaharudin Ahmad; Abdul Kadir Ishak
2005-01-01
The Malaysia Marine Radioactivity Database Development Project is one of five research contracts signed between MINT and AELB. Three marine sampling expeditions were carried out using the K.L. PAUS vessel owned by the Malaysian Fisheries Institute, Chendering, Terengganu. The first expedition took place in the waters off the East Coast of Peninsular Malaysia in August 2003, followed by the West Coast of Peninsular Malaysia in February 2004 and, finally, the Sarawak-Sabah waters in July 2004. Many challenges and problems were faced when collecting sediment, water, biota and plankton samples during these expeditions. (Author)
Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria
2012-01-01
This paper describes a method for reliably identifying saccharide materials in paintings. Since the 3rd millennium B.C., polysaccharide materials such as plant gums, sugar, flour, and honey were used as binding media and sizing agents in paintings, illuminated manuscripts, and polychrome objects. Although it has been reported that plant gums have a stable composition, their identification in paint samples is often doubtful and rarely discussed. Our research was carried out independently at two different laboratories: the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). It was shown in a previous stage of this research that the two methods give highly comparable data when analysing both reference paint samples and paint layers from art objects, thus the combined data was used to build a large database. In this study, the simultaneous presence of proteinaceous binders and pigments in fresh and artificially aged paint replicas was investigated, and it highlighted how these can affect the sugar profile of arabic, tragacanth, and fruit tree gums. The environmental contamination due to sugars from various plant tissues is also discussed. The results allowed the development of a new model for the reliable identification of saccharide binders in paintings based on the evaluation of markers that are stable to ageing and unaffected by pigments. This new model was applied to the sugar profiles obtained from the analysis of a large number of samples from murals, easel paintings, manuscripts, and polychrome objects from different geographical areas and dating from the 13th century BC to the 20th century AD, thus demonstrating its reliability.
International Nuclear Information System (INIS)
1994-05-01
This document introduces QA guidance pertaining to design and implementation of laboratory procedures and processes for collecting DOE Environmental Restoration and Waste Management (EM) ESAA (environmental sampling and analysis activities) data. It addresses several goals: identifying key laboratory issues and program elements for EM HQ and field office managers; providing non-prescriptive guidance; and introducing environmental data collection program elements for EM-263 assessment documents and programs. The guidance describes the implementation of laboratory QA elements within a functional QA program (development of the QA program and data quality objectives are not covered here).
Comparison of analytical methods for the determination of histamine in reference canned fish samples
Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.
2017-09-01
Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For the enzymatic and CD-ELISA methods, determination was conducted according to the producers' manuals. For RP-HPLC, histamine was derivatized with dansyl chloride, followed by RP-HPLC with diode array detection. Results of analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced somewhat higher results. Generally, analysis of four reference samples according to CD-ELISA and RP-HPLC showed good agreement for histamine determination (r=0.977 in the concentration range 16.95-216 mg kg-1). The results show that the applied enzymatic test and CD-ELISA are suitable screening methods for the determination of histamine in canned fish.
Directory of Open Access Journals (Sweden)
Elodie Caboux
Full Text Available The European Prospective Investigation into Cancer and Nutrition (EPIC) is a long-term, multi-centric prospective study in Europe investigating the relationships between cancer and nutrition. This study has served as a basis for a number of Genome-Wide Association Studies (GWAS) and other types of genetic analyses. Over a period of 5 years, 52,256 EPIC DNA samples have been extracted using an automated DNA extraction platform. Here we have evaluated the pre-analytical factors affecting DNA yield, including anthropometric, epidemiological and technical factors such as center of subject recruitment, age, gender, body-mass index, disease case or control status, tobacco consumption, number of aliquots of buffy coat used for DNA extraction, extraction machine or procedure, DNA quantification method, degree of haemolysis and variations in the timing of sample processing. We show that the largest significant variations in DNA yield were observed with degree of haemolysis and with center of subject recruitment. Age, gender, body-mass index, cancer case or control status and tobacco consumption also significantly impacted DNA yield. Feedback from laboratories which have analyzed DNA with different SNP genotyping technologies demonstrates that the vast majority of samples (approximately 88%) performed adequately in different types of assays. To our knowledge this study is the largest to date to evaluate the sources of pre-analytical variations in DNA extracted from peripheral leucocytes. The results provide a strong evidence-based rationale for standardized recommendations on blood collection and processing protocols for large-scale genetic studies.
Directory of Open Access Journals (Sweden)
Israel Sánchez-Moreno
2017-01-01
Full Text Available Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated, nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in clinical practice. The analytical validation of the new technique was performed in three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by the two techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between both analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.
Measuring public opinion on alcohol policy: a factor analytic study of a US probability sample.
Latimer, William W; Harwood, Eileen M; Newcomb, Michael D; Wagenaar, Alexander C
2003-03-01
Public opinion has been one factor affecting change in policies designed to reduce underage alcohol use. Extant research, however, has been criticized for using single survey items of unknown reliability to define adult attitudes on alcohol policy issues. The present investigation addresses a critical gap in the literature by deriving scales on public attitudes, knowledge, and concerns pertinent to alcohol policies designed to reduce underage drinking using a US probability sample survey of 7021 adults. Five attitudinal scales were derived from exploratory and confirmatory factor analyses addressing policies to: (1) regulate alcohol marketing, (2) regulate alcohol consumption in public places, (3) regulate alcohol distribution, (4) increase alcohol taxes, and (5) regulate youth access. The scales exhibited acceptable psychometric properties and were largely consistent with a rational framework which guided the survey construction.
Energy Technology Data Exchange (ETDEWEB)
West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.L. [Oak Ridge National Lab., TN (United States); Bottrell, D.W. [Dept. of Energy, Germantown, MD (United States)
1997-03-01
This study was undertaken to examine the hypothesis that prevalent and priority purgeable VOCs in properly preserved water samples are stable for at least 28 days. For the purposes of this study, VOCs were considered functionally stable if concentrations measured after 28 days did not change by more than 10% from the initial values. An extensive stability experiment was performed on freshly-collected surface water spiked with a suite of 44 purgeable VOCs. The spiked water was then distributed into multiple 40-mL VOC vials with 0.010-in Teflon-lined silicone septum caps prefilled with 250 mg of NaHSO{sub 4} (resulting pH of the water {approximately}2). The samples were sent to a commercial [Analytical Resources, Inc. (ARI)] and EPA (Region IV) laboratory where they were stored at 4 C. On 1, 8, 15, 22, 29, 36, and 71 days after sample preparation, analysts from ARI took 4 replicate samples out of storage and analyzed these samples for purgeable VOCs following EPA/SW846 8260A. A similar analysis schedule was followed by analysts at the EPA laboratory. This document contains the results from the EPA analyses; the ARI results are described in a separate report.
International Nuclear Information System (INIS)
Marts, D.J.
1987-05-01
A study of alternate methods to manually transport radioactive samples from their glove boxes to the Remote Analytical Laboratory (RAL) was conducted at the Idaho National Engineering Laboratory. The study was performed to mitigate the effects of a potential loss of sampling capabilities that could take place if a malfunction in the Pneumatic Transfer System (PTS) occurred. Samples are required to be taken from the cell glove boxes and analyzed at the RAL regardless of the operational status of the PTS. This paper documents the conclusions of the study and how a decision was reached on the best scenarios for manually transporting 15 mL vials of liquid process samples from the K, W, U, WG, or WH cell glove boxes in the Chemical Processing Plant (CPP) 601 to the RAL. This study of methods to manually remove the samples from the glove boxes, package them for safe shipment, transport them by the safest route, receive them at the RAL, and safely unload them was conducted by EG&G Idaho, Inc., for Westinghouse Idaho Nuclear Company as part of the Glove Box Sampling and Transfer System Project for the Fuel Processing Facilities Upgrade, Task 10, Subtask 2. The study focused on the safest and most reliable scenarios that could be implemented using existing equipment. Hardware modifications and new hardware proposals were identified, and their impact on the handling scenario was evaluated. It was concluded that by utilizing the existing facility hardware, these samples can be safely transported manually from the sample stations in CPP 601 to the RAL, and that additional hardware could facilitate the transportation process even further.
Robustness to non-normality of various tests for the one-sample location problem
Directory of Open Access Journals (Sweden)
Michelle K. McDougall
2004-01-01
Full Text Available This paper studies the effect of the normal distribution assumption on the power and size of the sign test, Wilcoxon's signed rank test and the t-test when used in one-sample location problems. Power functions for these tests under various skewness and kurtosis conditions are produced for several sample sizes from simulated data using the g-and-k distribution of MacGillivray and Cannon [5].
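The kind of simulation study this abstract describes can be reproduced in outline with scipy. This is a hedged sketch, not the authors' code: the sample size, replication count, and the exponential alternative (shifted to have median zero) are assumptions. Note that under the skewed alternative the t-test targets the mean while the sign test targets the median, which is exactly where non-normality bites.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(sampler, n=30, shift=0.0, reps=2000, alpha=0.05):
    """Estimate how often each one-sample test rejects H0: location = 0."""
    hits = {"sign": 0, "wilcoxon": 0, "t": 0}
    for _ in range(reps):
        x = sampler(n) + shift
        # Sign test: binomial test on the number of positive observations.
        pos = int(np.sum(x > 0))
        if stats.binomtest(pos, n, 0.5).pvalue < alpha:
            hits["sign"] += 1
        if stats.wilcoxon(x).pvalue < alpha:
            hits["wilcoxon"] += 1
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            hits["t"] += 1
    return {k: v / reps for k, v in hits.items()}

normal = lambda n: rng.standard_normal(n)
skewed = lambda n: rng.exponential(1.0, n) - np.log(2)  # median 0, skewed

size_normal = rejection_rate(normal)             # size: should sit near alpha
power_normal = rejection_rate(normal, shift=0.5) # power under a true shift
size_skewed = rejection_rate(skewed)             # effect of skewness on size
print(size_normal, power_normal, size_skewed)
```

Under the skewed sampler the mean is 1 − ln 2 ≈ 0.31 even though the median is 0, so the t-test's rejection rate is inflated while the sign test, which only uses signs relative to the median, keeps its nominal size.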
Quarles, C Derrick; Randunu, K Manoj; Brumaghim, Julia L; Marcus, R Kenneth
2011-10-01
The analysis of metal-binding proteins requires careful sample manipulation to ensure that the metal-protein complex remains in its native state and the metal retention is preserved during sample preparation or analysis. Chemical analysis for the metal content in proteins typically involves some type of liquid chromatography/electrophoresis separation step coupled with an atomic (i.e., inductively coupled plasma-optical emission spectroscopy or -mass spectrometry) or molecular (i.e., electrospray ionization-mass spectrometry) analysis step that requires altered-solvent introduction techniques. UV-VIS absorbance is employed here to monitor the iron content in human holo-transferrin (Tf) under various solvent conditions, changing polarity, pH, ionic strength, and the ionic and hydrophobic environment of the protein. Iron loading percentages (i.e. 100% loading equates to 2 Fe(3+):1 Tf) were quantitatively determined to evaluate the effect of solvent composition on the retention of Fe(3+) in Tf. Maximum retention of Fe(3+) was found in buffered (20 mM Tris) solutions (96 ± 1%). Exposure to organic solvents and deionized H(2)O caused release of ~23-36% of the Fe(3+) from the binding pocket(s) at physiological pH (7.4). Salt concentrations similar to separation conditions used for ion exchange had little to no effect on Fe(3+) retention in holo-Tf. Unsurprisingly, changes in ionic strength caused by additions of guanidine HCl (0-10 M) to holo-Tf resulted in unfolding of the protein and loss of Fe(3+) from Tf; however, denaturing and metal loss was found not to be an instantaneous process for additions of 1-5 M guanidinium to Tf. In contrast, complete denaturing and loss of Fe(3+) was instantaneous with ≥6 M additions of guanidinium, and denaturing and loss of iron from Tf occurred in parallel proportions. Changes to the hydrophobicity of Tf (via addition of 0-14 M urea) had less effect on denaturing and release of Fe(3+) from the Tf binding pocket compared to changes
Language Problems Among Abused and Neglected Children: A Meta-Analytic Review.
Sylvestre, Audette; Bussières, Ève-Line; Bouchard, Caroline
2016-02-01
Research data show that exposure to abuse and neglect has detrimental effects on a child's language development. In this meta-analysis, we analyze studies (k = 23) to compare the language skills (receptive language, expressive language, pragmatics) of children who have experienced abuse and/or neglect with the language skills of children who have not, and to examine whether age or type of maltreatment moderates the relationship between maltreatment and language skills. Results confirm that the language skills of children who have experienced abuse and/or neglect are delayed when compared to those of children who have not. Compared to older children, young children seem particularly vulnerable to abuse and neglect. No significant differences were demonstrated concerning the type of maltreatment suffered by the child. These findings support the necessity of early detection of language problems in abused and neglected children, as well as early interventions that will positively stimulate their development.
A study of some political problems considering current geographical analytical parameters
Directory of Open Access Journals (Sweden)
Héctor Adolfo Dupuy
2008-01-01
Full Text Available This paper intends to study some of the main problems presented, on different scales, by current politics, considering their spatial implications as well as various parameters recently offered by geographical science. The proposal is supported by the theoretical bases developed from Emmanuel Wallerstein and Peter Taylor's statement of a world system structured as a world economy, based on the capitalist mode of production. In turn, it attempts to provide a theoretical explanation for the dynamics experienced by the territories on the basis of such a world system's mechanics. According to these assumptions, an analysis is proposed of some of the main phenomena resulting from this framework and their spatial implications, such as the current power relations in the system, the subsistence of traditional power factors (Nation-states, ethnically based cultural configurations) and the appearance of new ones (the forming of transnational blocs and associations, new social movements, new forms of local participation), the importance of hegemonic and counter-hegemonic discourses in the construction of symbolic representations and of the mass media in such processes, or the new picture of cultural identity and hybridization arising from population mobility.
A novel four-dimensional analytical approach for analysis of complex samples.
Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J
2016-05-01
A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multi-heart-cutting LC system: a long modulation time of 4 min allows the complete transfer of most first-dimension peaks to the second-dimension column without fractionation, in contrast to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies data handling even when ion mobility spectrometry is introduced as a third dimension and mass spectrometry as a fourth. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices such as plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation.
ANALYTICAL RESULTS OF MOX COLEMANITE CONCRETE SAMPLE PBC-44.2
Energy Technology Data Exchange (ETDEWEB)
Best, D.; Cozzi, A.; Reigel, M.
2012-12-20
The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite-bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. Sample PBC-44.2 was received on 9/20/2012 and analyzed. The average total density measured by ASTM method C 642 was 2.03 g/cm{sup 3}, which met the lower bound of 1.88 g/cm{sup 3}. The average partial hydrogen density was 6.64E-02 g/cm{sup 3} as measured using method ASTM E 1311 and met the lower bound of 6.04E-02 g/cm{sup 3}. The average partial boron density, measured by the ASTM C 1301 method, was 1.70E-01 g/cm{sup 3}, which met the lower bound of 1.65E-01 g/cm{sup 3}.
Directory of Open Access Journals (Sweden)
D Cébron
2016-04-01
Full Text Available The present paper is concerned with the numerical simulation of Magneto-Hydro-Dynamic (MHD) problems with industrial tools. MHD received attention some twenty to thirty years ago as a possible alternative in propulsion applications; MHD-propelled ships have even been designed for that purpose. However, such propulsion systems proved to be of low efficiency, and fundamental research in the area has progressively received much less attention over the past decades. Numerical simulation of MHD problems could, however, provide interesting solutions in the field of turbulent flow control. The development of recent efficient numerical techniques for multi-physics applications provides a promising tool for the engineer for that purpose. In the present paper, some elementary test cases in laminar flow with magnetic forcing terms are analysed; the equations of the coupled problem are exposed, and analytical solutions are derived in each case and compared to numerical solutions obtained with a numerical tool for multi-physics applications. The present work can be seen as a validation of numerical tools (based on the finite element method) for academic as well as industrial application purposes.
Zehavi, D; Seiber, J N
1996-10-01
An analytical method has been developed for the determination of trace levels of trifluoroacetic acid (TFA), an atmospheric breakdown product of several of the hydrofluorocarbon (HFC) and hydrochlorofluorocarbon (HCFC) replacements for the chlorofluorocarbon (CFC) refrigerants, in water and air. TFA is derivatized to the volatile methyl trifluoroacetate (MTFA) and determined by automated headspace gas chromatography (HSGC) with electron-capture detection or manual HSGC using GC/MS in the selected ion monitoring (SIM) mode. The method is based on the reaction of an aqueous sample containing TFA with dimethyl sulfate (DMS) in concentrated sulfuric acid in a sealed headspace vial under conditions favoring distribution of MTFA to the vapor phase. Water samples are prepared by evaporative concentration, during which TFA is retained as the anion, followed by extraction with diethyl ether of the acidified sample and then back-extraction of TFA (as the anion) in aqueous bicarbonate solution. The extraction step is required for samples with a relatively high background of other salts and organic materials. Air samples are collected in sodium bicarbonate-glycerin-coated glass denuder tubes and prepared by rinsing the denuder contents with water to form an aqueous sample for derivatization and analysis. Recoveries of TFA from spiked water, with and without evaporative concentration, and from spiked air were quantitative, with estimated detection limits of 10 ng/mL (unconcentrated) and 25 pg/mL (concentrated 250 mL:1 mL) for water and 1 ng/m(3) (72 h at 5 L/min) for air. Several environmental air, fogwater, rainwater, and surface water samples were successfully analyzed; many showed the presence of TFA.
SIPPI: A Matlab toolbox for sampling the solution to inverse problems with complex prior information
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Cordua, Knud Skou; Looms, Majken Caroline
2013-01-01
We present an application of the SIPPI Matlab toolbox, to obtain a sample from the a posteriori probability density function for the classical tomographic inversion problem. We consider a number of different forward models, linear and non-linear, such as ray based forward models that rely...
Improving Creative Problem-Solving in a Sample of Third Culture Kids
Lee, Young Ju; Bain, Sherry K.; McCallum, R. Steve
2007-01-01
We investigated the effects of divergent thinking training (with explicit instruction) on problem-solving tasks in a sample of Third Culture Kids (Useem and Downie, 1976). We were specifically interested in whether the children's originality and fluency in responding increased following instruction, not only on classroom-based worksheets and the…
International Nuclear Information System (INIS)
Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.
2009-01-01
Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in short computational time, as the number of floating point operations generally decreases as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably far from each other. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for the neutral particle discrete ordinates (SN) transport model in slab geometry. The first scheme is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the SN model. Numerical results are given to illustrate the accuracy of the two reconstruction schemes described in this paper. (authors)
Stewart, Ian; Eaglesham, Geoffrey K; Poole, Sue; Graham, Glenn; Paulo, Carl; Wickramasinghe, Wasantha; Sadler, Ross; Shaw, Glen R
2010-10-01
A referee analysis method for the detection and quantification of Pacific ciguatoxins in fish flesh has recently been established by the public health analytical laboratory for the State of Queensland, Australia. Fifty-six fish samples were analysed, which included 10 fillets purchased as negative controls. P-CTX-1 was identified in 27 samples, and P-CTX-2 and P-CTX-3 were found in 26 of those samples. The range of P-CTX-1 concentrations was 0.04-11.4 microg/kg fish flesh; the coefficient of variation from 90 replicate analyses was 7.4%. A liquid chromatography/tandem mass spectrometry (HPLC-MS/MS) method utilising a rapid methanol extraction and clean-up is reliable and reproducible, with a detection limit of 0.03 microg/kg fish flesh. Some matrix effects are evident, with fish oil content a likely signal suppression factor. Species identification of samples by DNA sequence analysis revealed some evidence of fish substitution or inadvertent misidentification, which may have implications for the management and prevention of ciguatera poisoning. Blinded inspection of case notes from suspect ciguatera poisoning cases showed that reporting of ciguatera-related paraesthesias was highly predictive of the presence of ciguatoxins in analysed fish, with 13 of 14 expected cases having consumed fish that contained P-CTX-1 (p<0.001, Fisher's Exact Test).
Toward a mathematical theory of environmental monitoring: the infrequent sampling problem
International Nuclear Information System (INIS)
Pimentel, K.D.
1975-06-01
Optimal monitoring of pollutants in diffusive environmental media was studied in the contexts of the subproblems of the optimal design and management of environmental monitors for bounds on maximum allowable errors in the estimate of the monitor state or output variables. Concise problem statements were made. Continuous-time finite-dimensional normal mode models for distributed stochastic diffusive pollutant transport were developed. The resultant set of state equations was discretized in time for implementation in the Kalman Filter in the problem of optimal state estimation. The main results of this thesis concern the special class of optimal monitoring problem called the infrequent sampling problem. Extensions to systems including pollutant scavenging and systems with emission or radiation boundary conditions were made. (U.S.)
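The infrequent sampling problem described in this abstract has a simple scalar illustration: between measurements, the Kalman prediction variance of a stable diffusive mode grows toward its steady-state value, and the longest admissible gap between samples is the number of prediction steps before an error bound is crossed. The following is a minimal sketch under assumed parameters (a, q, r and the variance bound are illustrative values, not taken from the thesis):

```python
import numpy as np

# One normal mode of a diffusive transport model, discretized in time:
#   x[k+1] = a * x[k] + w[k],  w ~ N(0, q)   (a < 1: diffusive decay)
#   y[k]   = x[k] + v[k],      v ~ N(0, r)   (taken only when we sample)
a, q, r = 0.95, 0.05, 0.1

def max_sampling_interval(p_max, horizon=1000):
    """Longest gap between measurements keeping error variance <= p_max."""
    p = r * q / (r + q)        # start from a (roughly) post-update variance
    interval = 0
    for _ in range(horizon):
        p = a * p * a + q      # Kalman prediction step: variance grows
        if p > p_max:
            return interval
        interval += 1
    return interval            # variance stays bounded: sample "never"

# The steady-state prediction variance of this stable mode solves
# p = a^2 p + q, i.e. p_ss = q / (1 - a^2); any bound above p_ss
# is never violated, so sampling can be arbitrarily infrequent.
p_ss = q / (1 - a**2)
print(max_sampling_interval(0.3), max_sampling_interval(2 * p_ss))
```

For a tight bound the admissible interval is short; once the bound exceeds the steady-state variance the loop runs out its horizon, reflecting that a stable diffusive mode needs no sampling at all to stay within that error.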
Symptoms and problems in a nationally representative sample of advanced cancer patients
DEFF Research Database (Denmark)
Johnsen, Anna Thit; Petersen, Morten Aagaard; Pedersen, Lise
2009-01-01
Little is known about the need for palliative care among advanced cancer patients who are not in specialist palliative care. The purpose was to identify prevalence and predictors of symptoms and problems in a nationally representative sample of Danish advanced cancer patients. Patients with cancer...... or not were associated with several symptoms and problems. This is probably the first nationally representative study of its kind. It shows that advanced cancer patients in Denmark have symptoms and problems that deserve attention and that some patient groups are especially at risk....... predictors. In total, 977 (60%) patients participated. The most frequent symptoms/problems were fatigue (57%; severe 22%) followed by reduced role function, insomnia and pain. Age, cancer stage, primary tumour, type of department, marital status and whether the patient had recently been hospitalized...
International Nuclear Information System (INIS)
Noppe, H.; Verheyden, K.; Gillis, W.; Courtheyn, D.; Vanthemsche, P.; De Brabander, H.F.
2007-01-01
Since the 1970s, many analytical methods for the detection of illegal growth promoters, such as thyreostats, anabolics, β-agonists and corticosteroids, have been developed for a wide range of matrices of animal origin, including meat, fat, organ tissue, urine and faeces. The aim of this study was to develop an analytical method for the determination of ng L⁻¹ levels of estrogens, gestagens, androgens (EGAs) and corticosteroids in aqueous preparations (i.e. drinking water, drinking water supplements) commercially available on the 'black' market. For this, extraction was performed with a Bakerbond C18 speedisk, a technique commonly used in environmental analysis. After fractionation, four fractions were collected using a methanol:water gradient program. Gas chromatography coupled to electron impact multiple mass spectrometry (GC-EI-MS²) screening for the EGAs was carried out on the derivatized extracts. For the detection of corticosteroids, gas chromatography coupled to negative chemical ionization mass spectrometry (GC-NCI-MS) was used after oxidation of the extracts. Confirmation was done by liquid chromatography coupled to electrospray ionization multiple mass spectrometry (LC-ESI-MS²). The combined use of GC and LC coupled to MS enabled the identification and quantification of anabolics and corticosteroids at the low ng L⁻¹ level. This study demonstrated the occurrence of both androgens and corticosteroids in different commercial aqueous samples.
Determination of 93Zr, 107Pd and 135Cs in zircaloy hulls analytical development on inactive samples
International Nuclear Information System (INIS)
Excoffier, E.; Bienvenu, Ph.; Combes, C.; Pontremoli, S.; Delteil, N.; Ferrini, R.
2000-01-01
A study involving the participation of three laboratories of the Direction of the Fuel Cycle has been undertaken within the framework of a common interest program existing between the COGEMA and the CEA. Its purpose is to develop analytical methods for the determination of long-lived radionuclides in zircaloy hulls coming from spent fuel reprocessing operations. Acting as a complement to work carried out at the DRRV in ATALANTE concerning zircaloy dissolution and direct analysis of hull solutions, a study is now being conducted at the DESD/SCCD/LARC in Cadarache on three of these radionuclides, namely: zirconium 93, palladium 107 and caesium 135. These three radioisotopes have very long half-lives (∼10⁶ y) and decay mainly by emission of β particles. The analytical technique chosen for the final measurement is inductively coupled plasma mass spectrometry (ICP/MS). Prior to the measurement, chemical separation processes are used to extract the radionuclides from the matrix and separate them from interfering elements and β emitters. The method, developed initially on inactive solutions, is being validated on irradiated samples coming from the UP2/800 - UP3 reprocessing plants. (authors)
International Nuclear Information System (INIS)
Esch, R.A.
1997-01-01
This document is the final report for the double-contained receiver tank (DCRT) 244-TX grab samples. Three grab samples were collected from riser 8 on May 29, 1997. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The analytical results are presented in a table.
Alonso, Cristina; Romero, Estrella
2017-03-01
In parallel to the rapid growth of access to new technologies (NT), there has been an increase in their problematic use, especially among children and adolescents. Although research in this field is increasing, the studies have mainly been conducted in the community, and the characteristics associated with the problematic use of NT are unknown in samples that require clinical care. Therefore, the aim of this study is to analyze the relationship between problematic use of video games (UPV) and of the Internet (UPI) and personality traits and behavior problems in a clinical sample of children and adolescents. The sample consists of 88 patients who were examined in the clinical psychology consultation in the Mental Health Unit for Children and Adolescents of the University Hospital of Santiago de Compostela. Data were obtained from self-reports and rating scales filled out by parents. 31.8% of the participants presented UPI and 18.2%, UPV. The children and adolescents with problematic use of NT (UPNT) have lower levels of Openness to experience, Conscientiousness and Agreeableness and higher levels of Emotional instability, global Impulsivity and Externalizing behavior problems, as well as Attention and Thought problems. UPNT is emerging as an important issue in clinical care for children and adolescents, so its study in child and youth care units is needed. Understanding the psychopathological profile of children and adolescents with UPNT will allow for the development of differential and more specific interventions.
Andrade, Brendan F; Tannock, Rosemary
2013-11-01
This study tested whether children's symptoms of inattention and hyperactivity/impulsivity were associated with peer problems and whether these associations were mediated by conduct problems and prosocial behaviors. A community sample of 500 children, including 245 boys and 255 girls, who ranged in age from 6 to 9 years (M = 7.6, SD = 0.91), was recruited. Teachers' reports of children's inattention, hyperactivity/impulsivity, conduct problems, prosocial behaviors, and peer problems were collected. Symptoms of inattention and hyperactivity/impulsivity were significantly positively associated with peer problems. Conduct problems were associated with more peer problems and prosocial behaviors with fewer peer problems. Conduct problems and prosocial behaviors partially mediated the association between hyperactivity/impulsivity and peer problems and fully mediated the inattention-peer problems association. Findings show that prosocial behaviors and conduct problems are important variables that account for some of the negative impact of symptoms of inattention and hyperactivity/impulsivity on peer functioning.
Energy Technology Data Exchange (ETDEWEB)
Correa, R.A. [Departamento de Quimica Analitica, Facultad de Ciencias Bioquimicas y Farmaceuticas, Universidad Nacional de Rosario, Suipacha 531 (2000) Rosario (Argentina); Escandar, G.M. [Departamento de Quimica Analitica, Facultad de Ciencias Bioquimicas y Farmaceuticas, Universidad Nacional de Rosario, Suipacha 531 (2000) Rosario (Argentina)]. E-mail: gescanda@fbioyf.unr.edu.ar
2006-06-30
This paper discusses the first analytical determination of the widely used fungicide thiabendazole by nylon-induced phosphorimetry. Nylon was investigated as a novel solid-matrix for inducing room-temperature phosphorescence of thiabendazole, which was enhanced under the effect of external heavy-atom salts. Among the investigated salts, lead(II) acetate was the most effective in yielding a high phosphorescence signal. An additional enhancement of the phosphorescence emission was attained when the measurements were carried out under a nitrogen atmosphere. There was only a moderate increase in the presence of cyclodextrins. The room-temperature phosphorescence lifetimes of the adsorbed thiabendazole were measured under different working conditions and, in all cases, two decaying components were detected. On the basis of the obtained results, a very simple and sensitive phosphorimetric method for the determination of thiabendazole was established. The analytical figures of merit obtained under the best experimental conditions were: linear calibration range from 0.031 to 0.26 μg ml⁻¹ (the lowest value corresponds to the quantitation limit), relative standard deviation, 2.4% (n = 5) at a level of 0.096 μg ml⁻¹, and limit of detection calculated according to 1995 IUPAC Recommendations equal to 0.010 μg ml⁻¹ (0.03 ng/spot). The potential interference from common agrochemicals was also studied. The feasibility of determining thiabendazole in real samples was successfully evaluated through the analysis of spiked river, tap and mineral water samples.
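The IUPAC-style detection limit quoted above is commonly computed as three times the standard deviation of blank measurements divided by the calibration slope. A sketch with hypothetical calibration and blank data (not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. phosphorescence signal
conc   = np.array([0.031, 0.06, 0.10, 0.15, 0.20, 0.26])
signal = np.array([3.2, 6.1, 10.3, 15.2, 20.4, 26.3])

slope, intercept = np.polyfit(conc, signal, 1)  # least-squares calibration line

# Ten hypothetical blank readings; 3*s_blank/slope form of the IUPAC detection limit
blanks = np.array([0.21, 0.18, 0.25, 0.20, 0.22, 0.19, 0.23, 0.21, 0.20, 0.24])
s_blank = blanks.std(ddof=1)
lod = 3 * s_blank / slope    # limit of detection
loq = 10 * s_blank / slope   # limit of quantitation uses a factor of 10
```

The resulting detection limit falls below the lowest calibration standard, consistent with the lowest standard being taken as the quantitation limit.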
Kukusamude, Chunyapuk; Srijaranai, Supalax; Quirino, Joselito P
2014-05-01
The common SDS microemulsion (i.e. 3.3% SDS, 0.8% octane, and 6.6% butanol) and organic solvents were investigated for the stacking of cationic drugs in capillary zone electrophoresis using a low-pH separation electrolyte. The sample was prepared in the acidic microemulsion and a high percentage of organic solvent was included in the electrolyte at the anodic end of the capillary. The stacking mechanism was similar to micelle-to-solvent stacking, with the micelles replaced by the microemulsion for the transport of analytes to the organic-solvent-rich boundary. This boundary is found between the microemulsion and the anodic electrolyte. The effective electrophoretic mobility of the cations reversed from the direction of the anode in the microemulsion to the cathode in the boundary. Microemulsion-to-solvent stacking was successfully achieved with 40% ACN in the anodic electrolyte and hydrodynamic sample injection for 21 s at 1000 mbar (equivalent to 30% of the effective length). The sensitivity enhancement factors in terms of peak height and corrected peak area were 15 to 35 and 21 to 47, respectively. The linearity (R²) in terms of corrected peak area was >0.999. Interday precisions (%RSD, n = 6) were 3.3-4.0% for corrected peak area and 2.0-3.0% for migration time. Application to a spiked real sample is also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.
2009-01-01
In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Samples were analyzed for Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality, so that no more than 10 field samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%), because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds. A subset of 73 of these samples was analyzed for a suite of
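The 85-115% recovery window used as the accuracy criterion can be checked mechanically. A sketch with hypothetical measured and certified values (the Cr, Y, and Sb recoveries are set to echo the percentages quoted above):

```python
# Hypothetical measured vs. certified values for a reference material (mg/kg)
certified = {"Cu": 35.0, "Zn": 80.0, "Cr": 120.0, "Y": 21.0, "Sb": 0.50}
measured  = {"Cu": 34.1, "Zn": 82.5, "Cr": 92.4, "Y": 17.2, "Sb": 0.40}

def recovery_flags(measured, certified, lo=85.0, hi=115.0):
    """Return {element: % recovery} for elements outside the [lo, hi]% window."""
    out = {}
    for el, cert in certified.items():
        rec = 100.0 * measured[el] / cert
        if not (lo <= rec <= hi):
            out[el] = round(rec, 1)
    return out

flags = recovery_flags(measured, certified)  # low recoveries flag incomplete digestion
```

Elements flagged this way point to residence in mineral phases resistant to the acid digestion rather than to analytical error.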
Directory of Open Access Journals (Sweden)
LIANG Cui-cui
2015-01-01
Full Text Available An analytical method for measuring carbon and oxygen isotopic compositions of trace amounts of carbonate (>15 μg) was established on a Delta V Advantage isotope ratio MS coupled with a GasBench II. Different trace amounts (5-50 μg) of the carbonate standard sample IAEA-CO-1 were measured by GasBench II in 12 mL and 3.7 mL vials. When samples of less than 40 μg were acidified in 12 mL vials, most standard deviations of δ13C and δ18O exceeded 0.1‰, which could not satisfy high-precision measurements. When samples of more than 15 μg were acidified in 3.7 mL vials, standard deviations of δ13C and δ18O were 0.01‰-0.07‰ and 0.01‰-0.08‰ respectively, which satisfied high-precision measurements. Therefore, with the small 3.7 mL vials used to increase the concentration of carbon dioxide in the headspace, carbonate samples as small as 15 μg can be analyzed routinely by GasBench II continuous-flow IRMS. Meanwhile, the linear relationship between sample weight and peak area was strong (R² > 0.9932) and can be used to determine the carbon content of carbonate samples.
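The reported weight-area linearity can be exploited by fitting a line to calibration points and inverting it for an unknown sample. The numbers below are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical carbonate weight (ug) vs. CO2 peak area calibration points
weight = np.array([15.0, 20.0, 30.0, 40.0, 50.0])
area   = np.array([7.6, 10.1, 15.2, 20.0, 25.1])

slope, intercept = np.polyfit(weight, area, 1)     # least-squares line
r2 = np.corrcoef(weight, area)[0, 1] ** 2          # check linearity

# Invert the fitted line to estimate an unknown sample's carbonate weight
unknown_area = 12.5
est_weight = (unknown_area - intercept) / slope
```

A strong R², as reported in the abstract, is what justifies reading carbonate content off the fitted line.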
Reichel, Christian
2010-01-01
Electrophoretic techniques, namely isoelectric focusing polyacrylamide gel electrophoresis (IEF-PAGE) and sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) are key techniques used for confirming the doping-related abuse of recombinant erythropoietins and analogs. IEF-PAGE is performed on horizontal slab-gels with samples applied to the surface of the gel. Different sample application techniques can be employed, but application pieces and applicator strips are most frequently used. However, defective application pieces cause lane streaking during IEF of erythropoietin (EPO), which is especially pronounced in the acidic region of the gel. The effect is due to an incompatibility of the substance used for enhancing the wettability of the cellulose-based commercial product and is batch-dependent. A detailed mass spectrometric study was performed, which revealed that defective sample application pieces (bought between 2007 and 2010) contained a complex mixture of alcohol ethoxylates, alcohol ethoxysulfates, and alkyl sulfates (e.g. SDS). Anionic detergents, like the sulfates contained in these application pieces, are in general incompatible with IEF. Alternative application techniques proved partly useful. While homemade pieces made of blotting paper are a good alternative, the usage of applicator strips or shims is hampered by the risk of leaking wells, which lead to laterally diffused samples. Casting IEF-gels with wells appears to be the best solution, since sustained release of retained proteins from the application pieces can be avoided. Edge effects do not occur if wells are correctly filled with the samples. The evaluation of EPO-profiles with defects is prohibited by the technical document on EPO-analytics (TD2009EPO) of the World Anti-Doping Agency (WADA). Copyright © 2010 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
A. L. Lapikov
2014-01-01
Full Text Available The paper concerns the solution of the direct kinematic problem for the Stewart-Gough platform of type 6-3. The article presents a detailed analysis of methods for solving the direct kinematic problem for platform mechanisms based on parallel structures, and estimates the complexity of the problem for parallel-kinematics mechanisms in comparison with classic manipulators characterized by an open kinematic chain. A method for solving this problem is suggested. It consists in establishing the functional dependence of the Cartesian coordinates and orientation of the moving-platform centre on the values of the generalized coordinates of the manipulator, which, in the case of platform manipulators, are the lengths of the extensible arms connecting the foundation and the moving platform. The method is constructed so that the solution of the direct kinematic problem reduces to solving the analytical equation of the plane in which the moving platform lies. The equation of the required plane is built from three points, which in this case are the attachment points of the moving-platform joints. To define the values of the joint coordinates, a system of nine nonlinear equations is generated; all nine equations have the same type of nonlinearity, and the physical meaning of each is the Euclidean distance between points of the manipulator. The location and orientation of the moving platform are represented as a homogeneous transformation matrix, whose translation and rotation components can be defined through the required plane. The obtained theoretical results are intended for use in a decision support system during the complex research of multi-sectional manipulators of parallel kinematics, to describe the geometrically similar 3D-prototype of the
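The plane equation central to the method can be built from the three joint attachment points with a cross product. A minimal sketch with hypothetical coordinates (not the paper's manipulator parameters):

```python
import numpy as np

def plane_through(p1, p2, p3):
    """Return (n, d) such that n . x = d is the plane through three points."""
    n = np.cross(p2 - p1, p3 - p1)  # normal from two in-plane edge vectors
    return n, float(n @ p1)

# Hypothetical attachment points of the three moving-platform joints
p1 = np.array([1.0, 0.0, 2.0])
p2 = np.array([-0.5, 0.9, 2.1])
p3 = np.array([-0.5, -0.9, 1.9])

n, d = plane_through(p1, p2, p3)
centre = (p1 + p2 + p3) / 3.0  # the platform centre also lies in this plane
```

Once the joint coordinates are known from the nine distance equations, this plane fixes the rotation and translation components of the homogeneous transformation matrix.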
Elsheikh, Ahmed H.
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
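A toy version of the nested sampling core loop can be sketched in 1D, replacing the paper's HMC constrained step with simple rejection sampling from the prior (adequate only in low dimensions). A uniform prior on (-5, 5) with a standard normal likelihood gives an analytic evidence of about 0.1:

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(x):
    # Standard normal likelihood; with prior U(-5, 5), evidence ~ 0.1
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 100, 600
live = rng.uniform(-5, 5, n_live)
logL = loglike(live)

Z, X_prev = 0.0, 1.0
for i in range(1, n_iter + 1):
    worst = np.argmin(logL)
    X = np.exp(-i / n_live)                  # expected shrunken prior mass
    Z += np.exp(logL[worst]) * (X_prev - X)  # weight of the discarded point
    L_min = logL[worst]
    # Constrained step: draw from the prior subject to L > L_min.
    # (The paper uses HMC here; rejection sampling suffices in 1D.)
    while True:
        x_new = rng.uniform(-5, 5)
        if loglike(x_new) > L_min:
            break
    live[worst], logL[worst] = x_new, loglike(x_new)
    X_prev = X

Z += X_prev * np.exp(logL).mean()  # remaining live-point contribution
```

The evidence estimate lands near the analytic value of ~0.1, with scatter set by the number of live points.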
Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems
Czech Academy of Sciences Publication Activity Database
Šmíd, Martin
2012-01-01
Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf
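The record above concerns unit stratified sampling as a variance-reduction tool when approximating stochastic optimization problems by sampling. A generic illustration (not the paper's construction) of the effect on a simple expectation E[f(U)], with f(x) = x² and one draw per unit stratum:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x**2  # integrand; E[f(U(0,1))] = 1/3
n = 1000

# Plain Monte Carlo: n i.i.d. uniform draws
plain = f(rng.uniform(0, 1, n)).mean()

# Unit stratified sampling: one draw per stratum [k/n, (k+1)/n)
u = (np.arange(n) + rng.uniform(0, 1, n)) / n
strat = f(u).mean()

err_plain = abs(plain - 1 / 3)
err_strat = abs(strat - 1 / 3)
```

Stratification forces even coverage of the domain, so its error shrinks much faster than the O(n^-1/2) rate of plain Monte Carlo; the same idea motivates stratified scenario generation for stochastic programs.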
Food insecurity and mental health problems among a community sample of young adults.
Pryor, Laura; Lioret, Sandrine; van der Waerden, Judith; Fombonne, Éric; Falissard, Bruno; Melchior, Maria
2016-08-01
Food insecurity has been found to be related to anxiety and depression; however, the association with other psychiatric disorders, particularly among young adults, is not well known. We examined whether food insecurity is independently associated with four common mental health problems among a community sample of young adults in France. Data are from the TEMPO longitudinal cohort study. In 1991, participants' parents provided information on health and family socioeconomic characteristics. In 2011, participants' (18-35 years) reported food insecurity, mental health symptoms, and socioeconomic conditions (n = 1214). Mental health problems ascertained included major depressive episode, suicidal ideation, attention deficit and hyperactivity disorder, and substance abuse and/or dependence (nicotine, alcohol and cannabis). Cross-sectional associations between food insecurity and mental health problems were tested using modified Poisson regressions, weighted by inverse probability weights (IPW) of exposure. This makes food insecure and not food insecure participants comparable on all characteristics including socioeconomic factors and past mental health problems. 8.5 % of young adults were food insecure. In IPW-controlled analyses, food insecurity was associated with increased levels of depression (RR = 2.01, 95 % CI 1.01-4.02), suicidal ideation (RR = 3.23, 95 % CI 1.55-6.75) and substance use problems (RR = 1.68, 95 % CI 1.15-2.46). Food insecurity co-occurs with depression, suicidal ideation and substance use problems in young adulthood. Our findings suggest that reductions in food insecurity during this important life period may help prevent mental health problems. Policies aiming to alleviate food insecurity should also address individuals' psychiatric problems, to prevent a lifelong vicious circle of poor mental health and low socioeconomic attainment.
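The abstract's IPW adjustment reweights participants by the inverse of their estimated probability of exposure, so that food-insecure and non-food-insecure groups become comparable on covariates. A sketch with synthetic data and a hand-rolled logistic model; all variable names and values are illustrative, not TEMPO data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
ses = rng.normal(0, 1, n)                        # stand-in socioeconomic covariate
p_exp = 1 / (1 + np.exp(-(-2.0 + 1.0 * ses)))    # exposure depends on the covariate
food_insecure = rng.uniform(0, 1, n) < p_exp

# Fit logistic model P(exposure | covariate) by Newton-Raphson
X = np.column_stack([np.ones(n), ses])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (food_insecure - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

p_hat = 1 / (1 + np.exp(-X @ beta))
# IPW: 1/p for the exposed, 1/(1-p) for the unexposed
w = np.where(food_insecure, 1 / p_hat, 1 / (1 - p_hat))

def weighted_mean(x, wt):
    return (x * wt).sum() / wt.sum()

# Covariate imbalance before vs. after weighting
diff_raw = ses[food_insecure].mean() - ses[~food_insecure].mean()
diff_ipw = (weighted_mean(ses[food_insecure], w[food_insecure])
            - weighted_mean(ses[~food_insecure], w[~food_insecure]))
```

After weighting, the covariate gap between groups collapses, which is what lets the weighted Poisson regression estimate the food-insecurity association net of confounding.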
Directory of Open Access Journals (Sweden)
Magdalena Krajewska
2017-05-01
Full Text Available An analytical procedure for the analysis of carotenoids in marine sediments rich in organic matter has been developed. Analysis of these compounds is difficult; the application of methods used by other authors required optimization for the samples studied here. The analytical procedure involved multiple ultrasound-assisted extraction with acetone followed by liquid-liquid extraction (acetone extract:benzene:water, 15:1:10 v/v/v) and HPLC analysis. The influence of column temperature on pigment separation and the quantification method were investigated; a temperature of 5 °C was selected for the Lichrospher 100 RP-18e column. The pigments in the sediment extract were quantified using a method based on HPLC analysis (at 450 nm) and spectrophotometric measurements (at 450 nm), and extinction coefficients were determined for standard solutions at this wavelength. It is very important to use the value of the extinction coefficient appropriate to the wavelength at which the detection of carotenoids is carried out.
Sample problems for the novice user of the AMPX-II system
International Nuclear Information System (INIS)
Ford, W.E. III; Roussin, R.W.; Petrie, L.M.; Diggs, B.R.; Comolander, H.E.
1979-01-01
Contents of the IBM version of the AMPX system distributed by the Radiation Shielding Information Center (AMPX-II) are described. Sample problems which demonstrate the procedure for implementing AMPX-II modules to generate point cross sections; generate multigroup neutron, photon production, and photon interaction cross sections for various transport codes; collapse multigroup cross sections; check, edit, and punch multigroup cross sections; and execute a one-dimensional discrete ordinates transport calculation are detailed. 25 figures, 9 tables
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
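Of the techniques surveyed, parallel tempering is easy to sketch: several Metropolis walkers run at different temperatures and occasionally exchange states via the Metropolis criterion, letting the cold chain escape well-separated modes. A toy 1D bimodal example (the density is an assumption for illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)

def logp(x):
    return -((x**2 - 4.0) ** 2) / 2.0  # bimodal target: modes near x = +/-2

temps = np.array([1.0, 2.0, 4.0, 8.0])  # temperature ladder; chain 0 is the target
x = np.full(len(temps), 2.0)            # all chains start in the right mode
samples = []

for step in range(5000):
    # Metropolis move within each temperature
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < (logp(prop) - logp(x[k])) / T:
            x[k] = prop
    # Attempt one swap between a random adjacent pair (Metropolis criterion)
    k = rng.integers(0, len(temps) - 1)
    delta = (logp(x[k]) - logp(x[k + 1])) * (1 / temps[k + 1] - 1 / temps[k])
    if np.log(rng.uniform()) < delta:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])

samples = np.array(samples)
```

The hot chains cross the barrier at x = 0 freely and the swaps feed those crossings down the ladder, so the cold chain visits both modes, which a single low-temperature walker would rarely do.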
Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M
2014-04-10
Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems, derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female), predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
Zhong, Xiang Li; Schilling, Sibylle; Zaluzec, Nestor J; Burke, M Grace
2016-12-01
In recent years, an increasing number of studies utilizing in situ liquid and/or gaseous cell scanning/transmission electron microscopy (S/TEM) have been reported. Because of the difficulty in the preparation of suitable specimens, these environmental S/TEM studies have been generally limited to studies of nanoscale structured materials such as nanoparticles, nanowires, or sputtered thin films. In this paper, we present two methodologies which have been developed to facilitate the preparation of electron-transparent samples from conventional bulk metals and alloys for in situ liquid/gaseous cell S/TEM experiments. These methods take advantage of combining sequential electrochemical jet polishing followed by focused ion beam extraction techniques to create large electron-transparent areas for site-specific observation. As an example, we illustrate the application of this methodology for the preparation of in situ specimens from a cold-rolled Type 304 austenitic stainless steel sample, which was subsequently examined in both 1 atm of air as well as fully immersed in a H2O environment in the S/TEM followed by hyperspectral imaging. These preparation techniques can be successfully applied as a general procedure for a wide range of metals and alloys, and are suitable for a variety of in situ analytical S/TEM studies in both aqueous and gaseous environments.
Togola, Anne; Coureau, Charlotte; Guezennec, Anne-Gwenaëlle; Touzé, Solène
2015-05-01
The presence of acrylamide in natural systems is of concern from both environmental and health points of view. We developed an accurate and robust analytical procedure (offline solid phase extraction combined with UPLC/MS/MS) with a limit of quantification (20 ng L⁻¹) compatible with toxicity threshold values. The solid phase extraction (SPE) step, optimized with respect to the nature of the extraction phases, sampling volumes, and elution solvent, was validated according to ISO Standard ISO/IEC 17025 on groundwater, surface water, and industrial process water samples. Acrylamide is highly polar, which induces a high variability during the SPE step, therefore requiring the use of ¹³C-labeled acrylamide as an internal standard to guarantee the accuracy and robustness of the method (uncertainty about 25% (k = 2) at the limit of quantification level). The specificity of the method and the stability of acrylamide were studied for these environmental media, and it was shown that the method is suitable for measuring acrylamide in environmental studies.
The electron transport problem sampling by Monte Carlo individual collision technique
International Nuclear Information System (INIS)
Androsenko, P.A.; Belousov, V.I.
2005-01-01
The problem of electron transport is of interest in all fields of modern science, and Monte Carlo sampling has to be used to solve it. Electron transport is characterized by a large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which gives the researcher incontestable advantages in comparison with the condensed-history technique. For example, one does not need to specify the parameters of the condensed-history technique, such as the upper limit for electron energy, resolution, number of sub-steps, etc. The condensed-history technique may also lose some very important electron tracks, both because particle movement is limited by its step parameters and because of weaknesses in its algorithms, for example the energy-indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files were not pre-processed but taken directly from the electron data source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation, and atomic electro-ionization. Some sampling results are presented in comparison with analogs; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is presented in comparison with other techniques that are usually used. (authors)
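A minimal sketch of individual-collision sampling: the free path to the next collision is drawn from an exponential distribution governed by the total macroscopic cross section, and the interaction channel (elastic, bremsstrahlung, excitation, electro-ionization) is chosen in proportion to its partial cross section. The cross-section values are hypothetical, not ENDF-6 data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-channel macroscopic cross sections (1/cm) at one energy
channels = ["elastic", "bremsstrahlung", "excitation", "electro-ionization"]
sigma = np.array([5.0, 0.2, 1.3, 3.5])
sigma_tot = sigma.sum()

n = 200_000
# Free path to the next individual collision: exponential, mean 1/sigma_tot
paths = rng.exponential(1.0 / sigma_tot, n)
# Channel chosen in proportion to its share of the total cross section
picks = rng.choice(len(channels), size=n, p=sigma / sigma_tot)

mean_path = paths.mean()
fractions = np.bincount(picks, minlength=len(channels)) / n
```

In a full code each sampled channel would then update the electron's energy and direction before the next free path is drawn; the point here is only the per-collision sampling step that the condensed-history technique replaces.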
Energy Technology Data Exchange (ETDEWEB)
Marazuela, M.D., E-mail: marazuela@quim.ucm.es [Department of Analytical Chemistry, Faculty of Chemistry, Universidad Complutense de Madrid, E-28040 Madrid (Spain); Bogialli, S. [Department of Chemistry, University of Rome 'La Sapienza', Piazza Aldo Moro 5, 00185 Rome (Italy)]
2009-07-10
The determination of trace residues and contaminants in food has been of growing concern over the past few years. Residual antibacterials in food constitute a risk to human health, especially because they can contribute to the transmission of antibiotic-resistant pathogenic bacteria through the food chain. Therefore, to ensure food safety, EU and US regulatory agencies have established lists of forbidden or banned substances and tolerance levels for authorized veterinary drugs (e.g. antibacterials). In addition, EU Commission Decision 2002/657/EC has set requirements for the performance of analytical methods for the determination of veterinary drug residues in food and feedstuffs. In recent years, the use of powerful mass spectrometric detectors in combination with innovative chromatographic technologies has solved many problems related to the sensitivity and selectivity of this type of analysis. However, sample preparation remains the bottleneck, mainly in terms of analysis time and sources of error. This review, covering research published between 2004 and 2008, provides an updated overview of recent trends in sample preparation for the determination of antibacterial residues in foods, with special emphasis on on-line, high-throughput, multi-class methods, and discusses several applications in detail.
Self-recognition of mental health problems in a rural Australian sample.
Handley, Tonelle E; Lewin, Terry J; Perkins, David; Kelly, Brian
2018-04-19
Although mental health literacy has increased in recent years, mental illness is often under-recognised. Little research on mental illness has been conducted in rural areas, yet under-recognition may be most prominent there due to factors such as greater stigma and stoicism. The aim of this study is to create a profile of those who are most and least likely to self-identify mental health problems among rural residents with moderate-to-high psychological distress. Secondary analysis of a longitudinal postal survey. Rural and remote New South Wales, Australia. Four hundred and seventy-two community residents. Participants completed the K10 Psychological Distress Scale, as well as the question 'In the past 12 months have you experienced any mental health problems?' The characteristics of those who reported moderate/high distress scores were explored by comparing those who did and did not report recent mental health problems. Of the 472 participants with moderate/high distress, 319 (68%) reported a mental health problem. Reporting a mental health problem was more common among those with recent adverse life events or who perceived more stress from life events, and less common among those who attributed their symptoms to a physical cause. Among a rural sample with moderate/high distress, one-third did not report a mental health problem. Results suggest a threshold effect, whereby mental health problems are more likely to be acknowledged in the context of additional life events. Ongoing public health campaigns are necessary to ensure that symptoms of mental illness are recognised in the multiple forms that they take. © 2018 National Rural Health Alliance Ltd.
Energy Technology Data Exchange (ETDEWEB)
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques while ensuring the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop consists of three phases and depends on empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. The criterion can select points located in a feasible region of high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model, because sample points are not concentrated within extremely small regions as they can be in super-EGO. The performance of the proposed method (solvability, convergence properties, and efficiency) is validated through nonlinear numerical examples with disconnected feasible regions.
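For context, the classic infill sampling criterion that EGO-family methods build on is expected improvement (EI), which balances exploiting a low surrogate mean against exploring high surrogate uncertainty. The sketch below is the standard EI formula for minimization, not the authors' united criterion; the names are illustrative:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard EGO infill criterion (minimization): the expected amount
    by which a candidate improves on the best observed value f_best,
    given the surrogate's predictive mean mu and standard deviation sigma."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

def next_sample(candidates, f_best):
    """Return the index of the (mu, sigma) candidate with the largest EI."""
    return max(range(len(candidates)),
               key=lambda i: expected_improvement(*candidates[i], f_best))
```

A candidate with a high predictive mean but large uncertainty can outscore one with a slightly lower mean but near-zero uncertainty, which is exactly the exploration behavior sequential sampling criteria rely on.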
Directory of Open Access Journals (Sweden)
Cosgrave Elizabeth M
2008-09-01
Abstract Background The Mood and Anxiety Symptom Questionnaire (MASQ) was designed specifically to measure the Tripartite model of affect and is proposed to offer a delineation between the core components of anxiety and depression. Factor analytic data from adult clinical samples have shown mixed results; however, no studies employing confirmatory factor analysis (CFA) have supported the predicted structure of distinct Depression, Anxiety and General Distress factors. The Tripartite model has not been validated in a clinical sample of older adolescents and young adults. The aim of the present study was to examine the validity of the Tripartite model using scale-level data from the MASQ and correlational and confirmatory factor analysis techniques. Methods 137 young people (M = 17.78, SD = 2.63) referred to a specialist mental health service for adolescents and young adults completed the MASQ and a diagnostic interview. Results All MASQ scales were highly inter-correlated, with the lowest correlation between the depression- and anxiety-specific scales (r = .59). This pattern of correlations was observed for all participants meeting criteria for an Axis-I disorder but not for participants without a current disorder (r = .18). Confirmatory factor analyses were conducted to evaluate the model fit of a number of solutions. The predicted Tripartite structure was not supported. A 2-factor model demonstrated superior model fit and parsimony compared to 1- or 3-factor models. These broad factors represented Depression and Anxiety and were highly correlated (r = .88). Conclusion The present data lend support to the notion that the Tripartite model does not adequately explain the relationship between anxiety and depression in all clinical populations. Indeed, in the present study this model was found to be inappropriate for a help-seeking community sample of older adolescents and young adults.
Directory of Open Access Journals (Sweden)
Jessica E. Salvatore
2014-04-01
Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems, derived from genome-wide association estimates of an alcohol problems factor score from the age-18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female), predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by the interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
Directory of Open Access Journals (Sweden)
Marwan Fahs
2018-02-01
The Henry problem (HP) continues to play a useful role in theoretical and practical studies related to seawater intrusion (SWI) into coastal aquifers. The popularity of this problem is attributed to its simplicity and to the existence of semi-analytical (SA) solutions. The first SA solution was developed for a high uniform diffusion coefficient. Several further studies have contributed more realistic solutions with lower diffusion coefficients or velocity-dependent dispersion. All the existing SA solutions are limited to homogeneous and isotropic domains. This work attempts to improve the realism of the SA solution of the dispersive HP by extending it to heterogeneous and anisotropic coastal aquifers. The solution is obtained using the Fourier series method. A special hydraulic conductivity-depth model describing stratified heterogeneity is used for mathematical convenience. An efficient technique is developed to solve the flow and transport equations in the spectral space. With this technique, we show that the HP can be solved in the spectral space with the salt concentration as the primary unknown. Several examples are generated, and the SA solutions are compared against an in-house finite element code. The results provide high-quality data, assessed by quantitative indicators, that can be effectively used for code verification in realistic configurations of heterogeneity and anisotropy. The SA solution is used to explain contradictory results stated in previous works about the effect of anisotropy on the saltwater wedge. It is also used to investigate the combined influence of stratification and anisotropy on relevant metrics characterizing SWI. At a constant gravity number, anisotropy leads to landward migration of the saltwater wedge, more intense saltwater flux, a wider mixing zone and a shallower groundwater discharge zone to the sea. The influence of stratified heterogeneity is more pronounced in highly anisotropic aquifers.
Human-machine analytics for closed-loop sense-making in time-dominant cyber defense problems
Henry, Matthew H.
2017-05-01
Many defense problems are time-dominant: attacks progress at speeds that outpace human-centric systems designed for monitoring and response. Despite this shortcoming, these well-honed and ostensibly reliable systems pervade most domains, including cyberspace. The argument that often prevails when considering the automation of defense is that while technological systems are suitable for simple, well-defined tasks, only humans possess sufficiently nuanced understanding of problems to act appropriately under complicated circumstances. While this perspective is founded in verifiable truths, it does not account for a middle ground in which human-managed technological capabilities extend well into the territory of complex reasoning, thereby automating more nuanced sense-making and dramatically increasing the speed at which it can be applied. Snort and platforms like it enable humans to build, refine, and deploy sense-making tools for network defense. Shortcomings of these platforms include a reliance on rule-based logic, which confounds analyst knowledge of how bad actors behave with the means by which bad behaviors can be detected, and a lack of feedback-informed automation of sensor deployment. We propose an approach in which human-specified computational models hypothesize bad behaviors independent of indicators and then allocate sensors to estimate and forecast the state of an intrusion. State estimates and forecasts inform the proactive deployment of additional sensors and detection logic, thereby closing the sense-making loop. All the while, humans are on the loop, rather than in it, permitting nuanced management of fast-acting automated measurement, detection, and inference engines. This paper motivates and conceptualizes analytics to facilitate this human-machine partnership.
Walach, Harald; Loef, Martin
2015-11-01
The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.
Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.
2013-01-01
concentrations up to 750 mg/L (median of -4.4 percent). This lent support to the laboratory-experiment finding that some particulate nitrogen is sequestered during the digestion process, and demonstrated that negative TN-A bias was present in samples with very low suspended-sediment concentrations. At sediment concentrations above 750 mg/L, the negative TN-A bias became more likely and larger (median of -13.2 percent), suggesting a secondary mechanism of bias, such as reagent limitation. From a geospatial perspective, trends in TN-A bias were not explained by selected basin characteristics. Though variable, TN-K bias generally was positive in the synoptic-field study (median of 3.1 percent), probably as a result of the reduction of nitrate. Three alternative approaches for assessing TN in surface water were evaluated for their impacts on existing and future sampling programs. Replacing TN-A with TN-C would remove the bias from subsequent data, but this approach also would introduce discontinuity in historical records. Replacing TN-K with TN-C would lead to the removal of positive bias in TN-K in the presence of elevated nitrate. However, in addition to the issues that may arise from a discontinuity in the data record, this approach may not be applicable to regulatory programs that require the use of total Kjeldahl nitrogen for stream assessment. By adding TN-C to existing TN-A or TN-K analyses, historical-data continuity would be preserved and the transitional period could be used to minimize the impact of bias on data analyses. This approach, however, imposes the greatest burdens on field operations and in terms of analytical costs. The variation in these impacts on different sampling programs will challenge U.S. Geological Survey scientists attempting to establish uniform standards for TN sample collection and analytical determinations.
Does parent-child agreement vary based on presenting problems? Results from a UK clinical sample.
Cleridou, Kalia; Patalay, Praveetha; Martin, Peter
2017-01-01
Discrepancies are often found between child and parent reports of child psychopathology; nevertheless, the role of the child's presenting difficulties in relation to these is underexplored. This study investigates whether parent-child agreement on the conduct and emotional scales of the Strengths and Difficulties Questionnaire (SDQ) varied as a result of certain child characteristics, including the child's presenting problems to clinical services, age and gender. The UK-based sample consisted of 16,754 clinical records of children aged 11-17, the majority of whom were female (57%) and White (76%). The dataset was provided by the Child Outcomes Research Consortium, which collects outcome measures from child services across the UK. Clinicians reported the child's presenting difficulties, and parents and children completed the SDQ. Using correlation analysis, the main findings indicated that agreement varied as a result of the child's difficulties for reports of conduct problems, and this seemed to be related to the presence or absence of externalising difficulties in the child's presentation. This was not the case for reports of emotional difficulties. In addition, agreement was higher when reporting problems not consistent with the child's presentation; for instance, agreement on conduct problems was greater for children presenting with internalising problems. Lastly, the children's age and gender did not seem to have an impact on agreement. These findings demonstrate that certain child presenting difficulties, in particular conduct problems, may be related to informant agreement and need to be considered in clinical practice and research. Trial Registration This study was observational and as such did not require trial registration.
Local entropy as a measure for sampling solutions in constraint satisfaction problems
International Nuclear Information System (INIS)
Baldassi, Carlo; Ingrosso, Alessandro; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo
2016-01-01
We introduce a novel entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random constraint satisfaction problems (CSPs). First, we extend a recent result that, using a large-deviation analysis, shows that the geometry of the space of solutions of the binary perceptron learning problem (a prototypical CSP) contains regions of very high solution density. Despite being sub-dominant, these regions can be found by optimizing a local entropy measure. Building on these results, we construct a fast solver that relies exclusively on a local entropy estimate and can be applied to general CSPs. We describe its performance not only for the perceptron learning problem but also for the random K-satisfiability problem (another prototypical CSP with a radically different structure), and show numerically that a simple zero-temperature Metropolis search in the smooth local entropy landscape can reach sub-dominant clusters of optimal solutions in a small number of steps, while standard Simulated Annealing either requires extremely long cooling procedures or simply fails. We also discuss how EdMC can heuristically be made even more efficient for the cases we studied.
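The kind of Metropolis search over CSP assignments that the abstract benchmarks against can be sketched on a toy 3-SAT instance. This illustrates plain Metropolis sampling on the number of violated constraints (with a large inverse temperature approximating a zero-temperature search), not the local entropy objective of EdMC itself:

```python
import math
import random

def n_violated(assign, clauses):
    """A clause [(var, sign), ...] is violated when every literal is false,
    i.e. assign[var] != sign for all of its literals."""
    return sum(all(assign[v] != s for v, s in c) for c in clauses)

def metropolis_search(clauses, n_vars, beta=3.0, steps=5000, seed=1):
    """Metropolis sampling over assignments with energy = number of
    violated clauses; large beta approximates a zero-temperature search."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars)]
    e = n_violated(assign, clauses)
    for _ in range(steps):
        if e == 0:
            break                           # found a satisfying assignment
        v = rng.randrange(n_vars)
        assign[v] = not assign[v]           # propose a single-variable flip
        e_new = n_violated(assign, clauses)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                       # accept the move
        else:
            assign[v] = not assign[v]       # reject: undo the flip
    return assign, e
```

On satisfiable instances well below the satisfiability threshold this simple sampler finds solutions quickly; the paper's point is that on hard instances it stalls unless guided by the smoother local entropy landscape.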
Heydt, Carina; Fassunke, Jana; Künstlinger, Helen; Ihle, Michaela Angelika; König, Katharina; Heukamp, Lukas Carl; Schildhaus, Hans-Ulrich; Odenthal, Margarete; Büttner, Reinhard; Merkelbach-Bruse, Sabine
2014-01-01
Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high-quality DNA obtained from automated DNA extraction systems has increased, especially in laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3-24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality, and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations, but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and consistent DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can be used for this purpose.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and in extrapolating soil concentrations to air concentrations. The Environmental Protection Agency's (EPA) Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than the others. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding a preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than in discrete samples.
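The variability comparison in this abstract rests on the relative standard deviation (RSD, also called the coefficient of variation). A minimal helper, with illustrative numbers rather than the study's data, is:

```python
from statistics import mean, stdev

def relative_std_dev(replicates):
    """RSD in percent: sample standard deviation divided by the mean,
    used to compare the spread of replicate results across
    sampling/analysis methods."""
    return 100.0 * stdev(replicates) / mean(replicates)
```

For example, replicate results of 8, 10 and 12 (arbitrary units) give an RSD of 20%, while identical replicates give 0%.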
Non-erotic thoughts, attentional focus, and sexual problems in a community sample.
Nelson, Andrea L; Purdon, Christine
2011-04-01
According to Barlow's model of sexual dysfunction, anxiety in sexual situations leads to attentional focus on sexual performance at the expense of erotic cues, which compromises sexual arousal. This negative experience will enhance anxiety in future sexual situations, and non-erotic thoughts (NETs) relevant to performance will receive attentional priority. Previous research with student samples (Purdon & Holdaway, 2006; Purdon & Watson, 2010) has found that people experience many types of NETs in addition to performance-relevant thoughts, and that, consistent with Barlow's model, the frequency of and anxiety evoked by these thoughts is positively associated with sexual problems. Extending this previous work, the current study found that, in a community sample of women (N = 81) and men (N = 72) in long-term relationships, women were more likely to report body image concerns and external consequences of the sexual activity, while men were more likely to report performance-related concerns. Equally likely among men and women were thoughts about emotional consequences of the sexual activity. Regardless of thought content, experiencing more frequent NETs was associated with more sexual problems in both women and men. Moreover, as per Barlow's model, greater negative affect in anticipation of and during sexual activity predicted greater frequency of NETs and greater anxiety in response to NETs was associated with greater difficulty dismissing the thoughts. However, greater difficulty in refocusing on erotic thoughts during sexual activity uniquely predicted more sexual problems above the frequency and dismissability of NETs. Together, these data support the cognitive interference mechanism implicated by Barlow's causal model of sexual dysfunction and have implications for the treatment of sexual problems.
International Nuclear Information System (INIS)
1994-01-01
The DOE Analytical Laboratory Capacity Study was conducted to give EM-263 current information about existing and future analytical capacities and capabilities of site laboratories within the DOE Complex. Each DOE site may have one or more analytical laboratories in operation. These facilities were established to support site missions such as production, research and development, and personnel and environmental monitoring. With changing site missions and the DOE directives for environmental monitoring and cleanup, these laboratories are either devoting or planning to devote resources to support EM activities. The DOE site laboratories represent a considerable amount of capital investment and analytical capability, capacity, and expertise that can be applied to support the EM mission. They not only provide cost-effective high-volume analytical laboratory services, but are also highly recognized analytical research and development centers. Several sites have already transferred their analytical capability from traditional production support to environmental monitoring and waste management support. A model was developed to determine the analytical capacity of all laboratories in the DOE Complex. The model was applied at nearly all the major laboratories and the results collected from these studies are summarized in this report.
International Nuclear Information System (INIS)
Kulich, N.V.; Nemtsev, V.A.
1986-01-01
The analytical solution to the problem of the stationary temperature field in an infinite structural element of rectangular profile, characteristic of the junction of a vessel with the tube sheet of a heat exchanger (or of a finned surface), under boundary conditions of the third kind has been obtained by methods of the theory of functions of a complex variable. Using the analytical dependences obtained, calculations for this design element have been carried out and compared with known data. The proposed analytical solution can be used effectively in calculations of temperature fields in finned surfaces and in structural elements of power equipment of the considered profile, and the method can be applied to the solution of similar problems.
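The paper's two-dimensional complex-variable solution is not reproduced here, but the closely related one-dimensional fin solution under third-kind (convective) boundary conditions illustrates the kind of analytical dependence involved. This is the textbook adiabatic-tip fin formula, not the authors' result:

```python
import math

def fin_temperature_ratio(x, length, m):
    """Dimensionless excess temperature theta(x)/theta_base for a straight
    fin of length L with an adiabatic tip:
        theta/theta_base = cosh(m * (L - x)) / cosh(m * L),
    where m**2 = h*P / (k*A) groups the convective (h, perimeter P) and
    conductive (k, cross-section A) parameters of the third-kind problem."""
    return math.cosh(m * (length - x)) / math.cosh(m * length)
```

The ratio equals 1 at the fin base and decays monotonically toward the tip, faster for larger m (stronger convection relative to conduction).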
Salater, Julie; Røhr, Marthe
2010-01-01
Objective: To examine (a) the prevalence of sleep problems among 4-year-olds in the general population, (b) the prevalence of sleep problems among children with emotional and/or behavioural problems, and (c) whether specific sleep problems are associated with particular emotional/behavioural problems. Method: Using the Preschool Age Psychiatric Assessment (PAPA), data about sleep and emotional/behavioural problems were obtained from 727 parents of 4-year-olds, recruited for a large...
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
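The key computational ingredient for such a Gibbs scheme is drawing from a double-truncated normal, which can be done by inverse-CDF sampling. The sketch below, with illustrative mixture components rather than the paper's actual complete conditional, shows one Gibbs-style draw from a mixture of truncated normals:

```python
import random
import statistics

_STD = statistics.NormalDist()  # standard normal for cdf / inverse cdf

def sample_truncnorm(mu, sigma, lo, hi, rng):
    """Inverse-CDF draw from Normal(mu, sigma**2) truncated to [lo, hi]."""
    a = _STD.cdf((lo - mu) / sigma)
    b = _STD.cdf((hi - mu) / sigma)
    u = a + (b - a) * rng.random()      # uniform on the truncated CDF range
    return mu + sigma * _STD.inv_cdf(u)

def sample_mixture(weights, components, rng):
    """One draw from a weighted mixture of truncated normals, the form the
    complete conditional of a mismeasured covariate takes in a degree-one
    spline model.  components: (mu, sigma, lo, hi) tuples."""
    u = rng.random() * sum(weights)
    acc = 0.0
    for w, comp in zip(weights, components):
        acc += w
        if u <= acc:
            return sample_truncnorm(*comp, rng)
    return sample_truncnorm(*components[-1], rng)  # round-off guard
```

Because every draw is exact, this replaces the Metropolis-Hastings step and removes the associated tuning and mixing concerns.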
Directory of Open Access Journals (Sweden)
Ntina Kourmousi
2016-06-01
Full Text Available The Problem Solving Inventory (PSI) is designed to measure adults’ perceptions of their problem-solving ability. The present study aimed to translate it and assess its reliability and validity in a nationwide sample of 3668 Greek educators. Internal consistency reliability was evaluated with Cronbach’s alpha coefficient. The scale’s construct validity was examined by a confirmatory factor analysis (CFA) and by investigating its correlation with the Internality, Powerful Others and Chance Multidimensional Locus of Control Scale (IPC LOC Scale), the Rosenberg Self-Esteem Scale (RSES) and demographic information. Internal consistency reliability was satisfactory, with Cronbach’s alphas ranging from 0.79 to 0.91 for all PSI scales. CFA confirmed that the bi-level model fitted the data well. The root mean square error of approximation (RMSEA), the comparative fit index (CFI) and the goodness of fit index (GFI) values were 0.030, 0.97 and 0.96, respectively, further confirming the bi-level model and the three-factor structure of the PSI. Intercorrelations and correlation coefficients between the PSI, the IPC LOC Scale and the RSES were significant. Age, sex, and working experience differences were found. In conclusion, the Greek version of the PSI was found to have satisfactory psychometric properties and can therefore be used to evaluate Greek teachers’ perceptions of their problem-solving skills.
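For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha is computed from item-level scores as k/(k-1) · (1 - Σ item variances / variance of total scores). A small self-contained sketch, unrelated to the PSI data themselves:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-level scores.

    items: list of k lists, one per item, each holding the same
    respondents' scores (sample variance, n-1 denominator).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))
```

Identical items yield alpha = 1; uncorrelated items push the total-score variance toward the sum of item variances and alpha toward 0, which is why values of 0.79-0.91 as reported above are read as satisfactory.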
International Nuclear Information System (INIS)
Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.
2012-01-01
There are many sites in the world where the environment is still affected by contamination from past uranium production. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environmental monitoring studies can significantly limit the quality of safety assessment procedures and of the priority-action analyses needed for remediation planning. During recent decades the analytical laboratories of many enterprises now responsible for establishing site-specific environmental monitoring programs have significantly improved their sampling and analytical capacities. However, limited experience in planning an optimal site-specific sampling strategy, and in applying the required analytical techniques (modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment), prevents these laboratories from developing and efficiently conducting the monitoring programs needed as a basis for further safety assessment in decision-making procedures. This paper presents conclusions drawn from the experience of establishing monitoring programs in Ukraine and proposes practical steps to optimize sampling strategy planning and the analytical procedures to be applied in areas requiring safety assessment and justification of their potential remediation and safe management. (authors)
Directory of Open Access Journals (Sweden)
Stephanie M. Pendergrass
2016-01-01
Full Text Available A novel methodology is described for the sampling and analysis of diacetyl, 2,3-pentanedione, 2,3-hexanedione, and 2,3-heptanedione. These analytes were collected on o-phenylenediamine-treated silica gel tubes and quantitatively recovered as the corresponding quinoxaline derivatives. After derivatization, the sorbent was desorbed in 3 mL of ethanol solvent and analyzed using gas chromatography/nitrogen-phosphorous detection (GC/NPD). The limits of detection (LOD) achieved for each analyte were determined to be in the range of 5–10 nanograms/sample. Evaluation of the on-tube derivatization procedure indicated that it is unaffected by humidities ranging from 20% to 80% and that the derivatization procedure was quantitative for analyte concentrations ranging from 0.1 μg to approximately 500 μg per sample. Storage stability studies indicated that the derivatives were stable for 30 days when stored at both ambient and refrigerated temperatures. Additional studies showed that the quinoxaline derivatives were quantitatively recovered when sampling up to a total volume of 72 L at a sampling rate of 50 cc/min. This method will be important for evaluating and monitoring worker exposures in the food and flavoring industry. Samples can be collected over an 8-hour shift with up to 288 L total volume collected regardless of time, sampling rate, and/or the effects of humidity.
Czech Academy of Sciences Publication Activity Database
Pazzagli, M.; Malentacchi, F.; Simi, L.; Wyrich, R.; Guenther, K.; Hartmann, C.; Verderio, P.; Pizzamiglio, S.; Ciniselli, C.M.; Tichopád, Aleš; Kubista, Mikael; Gelmini, S.
2013-01-01
Roč. 59, č. 1 (2013), s. 20-31 ISSN 1046-2023 Institutional research plan: CEZ:AV0Z50520701 Keywords : Pre-analytical phase * RNA quality * Blood samples Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.221, year: 2013
Czech Academy of Sciences Publication Activity Database
Malentacchi, F.; Pazzagli, M.; Simi, L.; Orlando, C.; Wyrich, R.; Hartmann, C.C.; Verderio, P.; Pizzamiglio, S.; Ciniselli, C.M.; Tichopád, Aleš; Kubista, Mikael; Gelmini, S.
-, č. 424 (2013), s. 274-286 ISSN 0009-8981 Institutional research plan: CEZ:AV0Z50520701 Keywords : Pre-analytical phase * DNA quality * Blood samples Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.764, year: 2013
Directory of Open Access Journals (Sweden)
Øren Anita
2008-12-01
Full Text Available Abstract Background Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods Men and women 16–74 years old randomly selected from the Norwegian national population database received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
Mancini, Vincent O; Rigoli, Daniela; Heritage, Brody; Roberts, Lynne D; Piek, Jan P
2016-01-01
Poor motor skills are associated with a range of psychosocial consequences, including internalizing (anxious and depressive) symptoms. The Elaborated Environmental Stress Hypothesis provides a causal framework to explain this association. The framework posits that motor skills impact internalizing problems through an indirect effect via perceived social support. However, empirical evaluation is required. We examined whether motor skills had an indirect effect on anxious and depressive symptoms via perceived family support domains. This study used a community sample of 93 adolescents (12-16 years). Participants completed measures of motor skills, perceived social support across three dimensions (family, friend, and significant other), depressive symptoms, and anxious symptoms. Age, gender, verbal IQ, and ADHD symptoms were included as control variables. Regression analysis using PROCESS revealed that motor skills had an indirect effect on depressive symptoms via perceived family support, but not by perceived friend support or significant other support. The negative association between motor skills and anxious symptoms was not mediated by any perceived social support domain. Findings are consistent with previous literature indicating an association between motor skills and internalizing problems. However, we identified a different pattern of relationships across anxious and depressive symptoms. While anxiety and depressive symptoms were highly correlated, motor skills had an indirect effect on depressive symptoms via perceived family support only. Our findings highlight the importance of family support as a potential protective factor in the onset of depressive symptoms. This study provides partial support for the Elaborated Environmental Stress Hypothesis; however, further research is required.
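The indirect effect tested with PROCESS is, in its simplest form, the product a·b: the slope of the mediator on the predictor times the partial slope of the outcome on the mediator controlling for the predictor. A toy product-of-coefficients sketch (plain OLS, no bootstrap confidence intervals, which PROCESS would add; all function names here are hypothetical):

```python
def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residualize(x, y):
    """Residuals of y after regressing out x (Frisch-Waugh step)."""
    b = slope(x, y)
    a0 = sum(y) / len(y) - b * (sum(x) / len(x))
    return [yi - (a0 + b * xi) for xi, yi in zip(x, y)]

def indirect_effect(x, m, y):
    """a*b: slope of m~x times partial slope of y on m controlling x."""
    a = slope(x, m)
    b = slope(residualize(x, m), residualize(x, y))  # partial slope via residuals
    return a * b
```

By the Frisch-Waugh theorem, regressing the x-residuals of y on the x-residuals of m recovers the same partial slope b as the two-predictor regression of y on m and x.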
International Nuclear Information System (INIS)
Schuettelkopf, H.
1981-09-01
To study the behaviour of plutonium in the environment and to measure plutonium in the vicinity of nuclear facilities, a quick, sensitive analytical method is required which can be applied to all sample materials found in the environment. For a sediment contaminated with plutonium, a boiling-out method using first HNO3/HF and subsequently HNO3/Al(NO3)3 was found to be successful. The leaching solution was then extracted with TOPO and the plutonium back-extracted with ascorbic acid/HCl. Several purification steps and finally electroplating from ammonium oxalate yielded an optimum sample for the α-spectroscopic determination of plutonium. The resulting analytical method can be applied to all materials found in the environment. The sample size is 100 g but may be much greater. The average chemical yield is between 70 and 80%. The detection limit for soil samples is 0.1 fCi/g and for plant samples 0.5 fCi/g. One technician can perform eight analyses per working day. The analytical procedure was applied to a large number of environmental samples and the results of these analyses are given. (orig./RB) [de
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
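The "small sample size" problem named above can be seen directly: with fewer training samples than feature dimensions, the within-class scatter matrix is rank-deficient, so the matrix inverse required by Fisher's criterion does not exist. A toy demonstration of that singularity (illustrative only, unrelated to the authors' weighted piecewise scheme):

```python
def within_class_scatter(samples):
    """S_w = sum over samples of (x - mean)(x - mean)^T for one class."""
    d, n = len(samples[0]), len(samples)
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    S = [[0.0] * d for _ in range(d)]
    for s in samples:
        diff = [s[j] - mean[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                S[i][j] += diff[i] * diff[j]
    return S

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```

Two samples in three dimensions give a scatter matrix of rank at most one, hence determinant zero: standard LDA cannot invert it, which is the failure mode the piecewise-discriminant approach works around.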
Directory of Open Access Journals (Sweden)
Monica C. Skewes
2013-08-01
Full Text Available Background. Recent research has identified the use of caffeinated energy drinks as a common, potentially risky behaviour among college students that is linked to alcohol misuse and consequences. Research also suggests that energy drink consumption is related to other risky behaviours such as tobacco use, marijuana use and risky sexual activity. Objective. This research sought to examine the associations between frequency of energy drink consumption and problematic alcohol use, alcohol-related consequences, symptoms of alcohol dependence and drinking motives in an ethnically diverse sample of college students in Alaska. We also sought to examine whether ethnic group moderated these associations in the present sample of White, Alaska Native/American Indian and other ethnic minority college students. Design. A paper-and-pencil self-report questionnaire was completed by a sample of 298 college students. Analysis of covariance (ANCOVA) was used to examine the effects of energy drink use, ethnic group and energy drink by ethnic group interactions on alcohol outcomes after controlling for variance attributed to gender, age and frequency of binge drinking. Results. Greater energy drink consumption was significantly associated with greater hazardous drinking, alcohol consequences, alcohol dependence symptoms, drinking for enhancement motives and drinking to cope. There were no main effects of ethnic group, and there were no significant energy drink by ethnic group interactions. Conclusion. These findings replicate those of other studies examining the associations between energy drink use and alcohol problems, but contrary to previous research we did not find ethnic minority status to be protective. It is possible that energy drink consumption may serve as a marker for other health risk behaviours among students of various ethnic groups.
Energy Technology Data Exchange (ETDEWEB)
Haselow, L.A.; Rogers, V.A. [Westinghouse Savannah River Co., Aiken, SC (United States); Riordan, C.J. [Metcalf and Eddy, Inc. (United States); Eidson, G.W.; Herring, M.K. [Normandeau Associates, Inc. (United States)
1992-08-01
Shallow water and soils along Upper Three Runs Creek (UTRC) and associated wetlands between SRS Road F and Cato Road were sampled for nonradioactive and radioactive constituents. The sampling program is associated with risk evaluations being performed for various regulatory documents in these areas of the Savannah River Site (SRS). WSRC selected fifty sampling sites bordering the Mixed Waste Management Facility (MWMF), F- and H-Area Seepage Basins (FHSB), and the Sanitary Landfill (SL). The analytical results from this study provided information on the water and soil quality in UTRC and its associated wetlands. The analytical results from this investigation indicated that the primary constituents and radiological indicators detected in the shallow water and soils were tritium, gross alpha, radium 226, total radium and strontium 90. This investigation involved the collection of shallow water samples during the Fall of 1991 and the Spring of 1992 at fifty (50) sampling locations. Sampling was performed during these periods to incorporate high and low water table periods. Samples were collected from three sections along UTRC denoted as Phase I (MWMF), Phase II (FHSB) and Phase III (SL). One vibracored soil sample was also collected in each phase during the Fall of 1991. This document consists solely of experimental data obtained from the sampling procedures.
Energy Technology Data Exchange (ETDEWEB)
Haselow, L.A.; Rogers, V.A. [Westinghouse Savannah River Co., Aiken, SC (United States); Riordan, C.J. [Metcalf and Eddy (United States); Eidson, G.W.; Herring, M.K. [Normandeau Associates, Inc., Aiken, SC (United States)
1992-08-01
Shallow water and soils along Upper Three Runs Creek (UTRC) and associated wetlands between SRS Road F and Cato Road were sampled for nonradioactive and radioactive constituents. The sampling program is associated with risk evaluations being performed for various regulatory documents in these areas of the Savannah River Site (SRS). WSRC selected fifty sampling sites bordering the Mixed Waste Management Facility (MWMF), F- and H-Area Seepage Basins (FHSB), and the Sanitary Landfill (SL). The analytical results from this study provided information on the water and soil quality in UTRC and its associated wetlands. The analytical results from this investigation indicated that the primary constituents and radiological indicators detected in the shallow water and soils were tritium, gross alpha, radium 226, total radium and strontium 90. This investigation involved the collection of shallow water samples during the Fall of 1991 and the Spring of 1992 at fifty (50) sampling locations. Sampling was performed during these periods to incorporate high and low water table periods. Samples were collected from three sections along UTRC denoted as Phase I (MWMF), Phase II (FHSB) and Phase III (SL). One vibracored soil sample was also collected in each phase during the Fall of 1991. This document consists of experimental data obtained from the sampling procedures.
Directory of Open Access Journals (Sweden)
Gladys Arreaza
2016-09-01
Full Text Available In cancer drug discovery, it is important to investigate the genetic determinants of response or resistance to cancer therapy as well as factors that contribute to adverse events in the course of clinical trials. Despite the emergence of new technologies and the ability to measure more diverse analytes (e.g., circulating tumor cells (CTCs), circulating tumor DNA (ctDNA), etc.), tumor tissue is still the most common and reliable source for biomarker investigation. Because of its worldwide use and ability to preserve samples for many decades at ambient temperature, formalin-fixed, paraffin-embedded (FFPE) tumor tissue is likely to be the preferred choice for tissue preservation in clinical practice for the foreseeable future. Multiple analyses are routinely performed on the same FFPE samples (such as immunohistochemistry (IHC), in situ hybridization, RNAseq, DNAseq, TILseq, Methyl-Seq, etc.). Thus, specimen prioritization and optimization of the isolation of analytes are critical to ensure successful completion of each assay. FFPE is notorious for producing suboptimal DNA quality and low DNA yield. However, commercial vendors tend to request a higher DNA sample mass than is actually required for downstream assays, which restricts the breadth of biomarker work that can be performed. We evaluated multiple genomics service laboratories to assess the current state of NGS pre-analytical processing of FFPE samples. Significant differences in pre-analytical capabilities were observed. Key aspects are highlighted and recommendations are made to improve the current practice in translational research.
Andrade, Brendan F; Tannock, Rosemary
2014-06-01
This prospective 2-year longitudinal study tested whether inattentive and hyperactive/impulsive symptom dimensions predicted future peer problems, when accounting for concurrent conduct problems and prosocial skills. A community sample of 492 children (49 % female) who ranged in age from 6 to 10 years (M = 8.6, SD = .93) was recruited. Teacher reports of children's inattention, and hyperactivity/impulsivity symptoms, conduct problems, prosocial skills and peer problems were collected in two consecutive school years. Elevated inattention and hyperactivity/impulsivity in Year-1 predicted greater peer problems in Year-2. Conduct problems in the first and second years of the study were associated with more peer problems, and explained a portion of the relationship between inattention and hyperactivity/impulsivity with peer problems. However, prosocial skills were associated with fewer peer problems in children with elevated inattention and hyperactivity/impulsivity. Inattention and hyperactivity/impulsivity have negative effects on children's peer functioning after 1-year, but concurrent conduct problems and prosocial skills have important and opposing impacts on these associations.
Directory of Open Access Journals (Sweden)
Yianna Vovides
2016-06-01
Full Text Available Since the turn of the 21st century, we have seen a surge of studies on the state of U.S. education addressing issues such as cost, graduation rates, retention, achievement, engagement, and curricular outcomes. There is an expectation that graduates should be able to enter the workplace equipped to take on complex and “messy” or ill-structured problems as part of their professional and everyday life. In the context of online learning, we have identified two key issues that are elusive (hard to capture and make visible): learning with ill-structured problems and the interaction of social and individual learning. We believe that the intersection between learning and analytics has the potential, in the long term, to minimize the elusiveness of deep learning. A proposed analytics model is described in this article that is meant to capture and also support further development of a learner’s reflective sensemaking.
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
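The implicit-sampling idea of concentrating samples in the high-probability region near the MAP estimate can be illustrated with plain self-normalized importance sampling from a proposal centered at the MAP point. This one-dimensional toy is not the linear-map construction or the TOUGH2 setup of the abstract; all names are hypothetical:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_mean(target_pdf, map_estimate, proposal_sigma, n, rng):
    """Posterior-mean estimate via importance sampling around the MAP point.

    Samples come from a Gaussian proposal centered at map_estimate; each
    sample is reweighted by target/proposal and the weights are
    self-normalized, so target_pdf may be unnormalized.
    """
    xs = [rng.gauss(map_estimate, proposal_sigma) for _ in range(n)]
    ws = [target_pdf(x) / gauss_pdf(x, map_estimate, proposal_sigma) for x in xs]
    wsum = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / wsum
```

Because the proposal already sits where the posterior mass is, few samples carry negligible weight; in the hydrological setting each weight evaluation is a forward simulation, which is exactly the cost the authors reduce.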
Li, Xiaoguang Sunny; Li, Shu; Kellermann, Gottfried
2016-10-01
It remains a challenge to simultaneously quantify catecholamines and metanephrines in a simple, sensitive and cost-effective manner due to pre-analytical and analytical constraints. Herein, we describe such a method consisting of a miniaturized sample preparation and selective LC-MS/MS detection using second morning spot urine samples. Ten microliters of second morning urine sample were subjected to solid phase extraction on an Oasis HLB microplate upon complexation with phenylboronic acid. The analytes were well-resolved on a Luna PFP column followed by tandem mass spectrometric detection. Full validation, the suitability of spot urine sampling, and biological variation were investigated. The extraction recovery and matrix effect are 74.1-97.3% and 84.1-119.0%, respectively. The linearity range is 2.5-500, 0.5-500, 2.5-1250, 2.5-1250 and 0.5-1250 ng/mL for norepinephrine, epinephrine, dopamine, normetanephrine and metanephrine, respectively. The intra- and inter-assay imprecisions are ≤9.4% for spiked quality control samples, and the respective recoveries are 97.2-112.5% and 95.9-104.0%. The Deming regression slope is 0.90-1.08, and the mean Bland-Altman percentage difference is from -3.29 to 11.85 between the published and proposed methods (n=50). The correlation observed between the spot and 24 h urine collections is significant (n=20, p<0.0001, r: 0.84-0.95, slope: 0.61-0.98). No statistical differences are found in day-to-day biological variability (n=20). Reference intervals are established for an apparently healthy population (n=88). The developed method, being practical, sensitive, reliable and cost-effective, is expected to set a new stage for routine testing, basic research and clinical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
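The method-comparison statistic quoted above, the mean Bland-Altman percentage difference, is simply the per-sample difference between the two methods expressed as a percentage of their per-sample mean, averaged over all pairs. A minimal sketch (a hypothetical helper, not the authors' code):

```python
def bland_altman_pct(method_a, method_b):
    """Mean Bland-Altman percentage difference between two paired methods.

    For each pair, 100*(a - b) / ((a + b)/2); the mean over pairs
    summarizes systematic bias between the methods.
    """
    diffs = [100.0 * (a - b) / ((a + b) / 2.0) for a, b in zip(method_a, method_b)]
    return sum(diffs) / len(diffs)
```

A value near zero, as in the -3.29 to 11.85 range reported, indicates the new method agrees with the published one on average.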
Directory of Open Access Journals (Sweden)
Vincent Oreste Mancini
2016-04-01
Full Text Available Objectives: Poor motor skills are associated with a range of psychosocial consequences, including internalizing (anxious and depressive) symptoms. The Elaborated Environmental Stress Hypothesis provides a causal framework to explain this association. The framework posits that motor skills impact internalizing problems through an indirect effect via perceived social support. However, empirical evaluation is required. We examined whether motor skills have an indirect effect on anxious and depressive symptoms via perceived family support domains. Methods: This study used a community sample of 93 adolescents (12-16 years). Participants completed measures of motor skills, perceived social support across three dimensions (family, friend, and significant other), depressive symptoms, and anxious symptoms. Age, gender, verbal IQ, and ADHD symptoms were included as control variables. Results: Regression analysis using PROCESS revealed that motor skills had an indirect effect on depressive symptoms via perceived family support, but not by perceived friend support or significant other support. The negative association between motor skills and anxious symptoms was not mediated by any perceived social support domain. Conclusions: Findings are consistent with previous literature indicating an association between motor skills and internalizing problems. However, we identified a different pattern of relationships across anxious and depressive symptoms. While anxiety and depressive symptoms were highly correlated, motor skills had an indirect effect on depressive symptoms via perceived family support only. Our findings highlight the importance of family support as a potential protective factor in the onset of depressive symptoms. This study provides partial support for the Elaborated Environmental Stress Hypothesis; however, further research is required.
Hodgins, David C; Schopflocher, Don P; el-Guebaly, Nady; Casey, David M; Smith, Garry J; Williams, Robert J; Wood, Robert T
2010-09-01
The association between childhood maltreatment and gambling problems was examined in a community sample of men and women (N = 1,372). As hypothesized, individuals with gambling problems reported greater childhood maltreatment than individuals without gambling problems. Childhood maltreatment predicted severity of gambling problems and frequency of gambling even when other individual and social factors were controlled including symptoms of alcohol and other drug use disorders, family environment, psychological distress, and symptoms of antisocial disorder. In contrast to findings in treatment-seeking samples, women with gambling problems did not report greater maltreatment than men with gambling problems. These results underscore the need for both increased prevention of childhood maltreatment and increased sensitivity towards trauma issues in gambling treatment programs for men and women.
Tank 241-U-103, grab samples 3U-99-1, 3U-99-2 and 3U-99-3 analytical results for the final report
International Nuclear Information System (INIS)
STEEN, F.H.
1999-01-01
This document is the final report for tank 241-U-103 grab samples. Three grab samples were collected from riser 13 on March 12, 1999 and received by the 222-S laboratory on March 15, 1999. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (TSAP) (Sasaki, 1999) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The analytical results are presented in the data summary report. None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC) and plutonium-239 (Pu-239) analyses exceeded the notification limits stated in the TSAP.
Tempelaar, Dirk; Rienties, Bart; Nguyen, Quan
2018-01-01
The identification of students’ learning strategies by using multi-modal data that combine trace data with self-report data is the prime aim of this study. Our context is an application of dispositional learning analytics in a large introductory mathematics and statistics course, based on blended
Energy Technology Data Exchange (ETDEWEB)
Chae, Myeong Hu; Lee, Hu Jun; Kim, Ha Seok
1989-02-15
This book gives explanations on analytical chemistry in ten chapters, which deal with the development of analytical chemistry; the theory of error, with definitions and classification; sample treatment and gravimetry, covering the general process of gravimetry in aqueous and non-aqueous solution; precipitation titration, covering precipitation reactions and their types; complexometry, with a summary and complex compounds; oxidation-reduction equilibrium, covering electrode potential and potentiometric titration; solvent extraction; chromatography; and experiments, with basic operations for chemical experimentation.
International Nuclear Information System (INIS)
Chae, Myeong Hu; Lee, Hu Jun; Kim, Ha Seok
1989-02-01
This book gives explanations on analytical chemistry in ten chapters, which deal with the development of analytical chemistry; the theory of error, with definitions and classification; sample treatment and gravimetry, covering the general process of gravimetry in aqueous and non-aqueous solution; precipitation titration, covering precipitation reactions and their types; complexometry, with a summary and complex compounds; oxidation-reduction equilibrium, covering electrode potential and potentiometric titration; solvent extraction; chromatography; and experiments, with basic operations for chemical experimentation.
Kansi, Juliska; Wichstrom, Lars; Bergman, Lars R.
2005-01-01
The longitudinal stability of eating problems and their relationships to risk factors were investigated in a representative population sample of 623 Norwegian girls aged 13-14 followed over 7 years (3 time points). Three eating problem symptoms were measured: Restriction, Bulimia-food preoccupation, and Diet, all taken from the 12-item Eating…
Wilson, Walter B; Costa, Andréia A; Wang, Huiyong; Dias, José A; Dias, Sílvia C L; Campiglia, Andres D
2012-07-06
The analytical performance of BEA, a commercial zeolite, is evaluated for the pre-concentration of fifteen Environmental Protection Agency polycyclic aromatic hydrocarbons and their subsequent HPLC analysis in tap and lake water samples. The pre-concentration factors obtained with BEA have led to a method with excellent analytical figures of merit. One-milliliter aliquots were sufficient to obtain excellent precision of measurements at the parts-per-trillion concentration level, with relative standard deviations varying from 4.1% (dibenzo[a,h]anthracene) to 13.4% (pyrene). The limits of detection were excellent as well and varied between 1.1 (anthracene) and 49.9 ng L(-1) (indeno[1,2,3-cd]pyrene). The recovery values of all the studied compounds meet the criterion for regulated polycyclic aromatic hydrocarbons, which mandates relative standard deviations equal to or lower than 25%. The small volume of organic solvent (100 μL per sample) and amount of BEA (2 mg per sample) make sample pre-concentration environmentally friendly and cost effective. The extraction procedure is well suited for numerous samples, as the small working volume (1 mL) facilitates the implementation of simultaneous sample extraction. These are attractive features when routine monitoring of numerous samples is contemplated. Copyright © 2012 Elsevier B.V. All rights reserved.
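The precision criterion cited here (relative standard deviation no greater than 25%) is simply 100 · s / mean over replicate measurements, with s the sample standard deviation. A minimal sketch of the check:

```python
def relative_std_dev(values):
    """Relative standard deviation (%) of replicate measurements."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * (var ** 0.5) / mean
```

For example, replicate recoveries of 98, 102, 100 and 100 give an RSD of about 1.6%, comfortably inside the regulatory 25% limit; the 4.1-13.4% values reported above pass the same test.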
Sampling solution traces for the problem of sorting permutations by signed reversals
2012-01-01
Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution, while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation, following a partial ordering. By using traces, we can therefore represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of large permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists of a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results
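The elementary operation underlying all of these algorithms is the signed reversal itself: ρ(i, j) reverses a segment of a signed permutation and flips the sign of every element in it. A minimal sketch (0-based inclusive indices and a plain list representation are assumptions here):

```python
def signed_reversal(perm, i, j):
    """Apply rho(i, j): reverse perm[i..j] (inclusive) and negate each element."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

# a single reversal sorts (-2, -1) into the identity (+1, +2)
sorted_perm = signed_reversal([-2, -1], 0, 1)  # → [1, 2]
```

A sorting scenario is then just a sequence of such reversals ending at the identity permutation; a trace groups the scenarios that use the same reversal set.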
Taylor, Wendy; Stacey, Kaye
2014-01-01
This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…
Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using
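The abstract is truncated, but the core NS loop it builds on is simple: keep a pool of live points, repeatedly retire the lowest-likelihood one (accumulating its evidence contribution), and replace it with a new prior draw constrained to higher likelihood. A minimal sketch using naive rejection sampling for the constrained draw (the paper replaces that step with HMC moves; everything below is a generic illustration, not the HNS algorithm itself):

```python
import math
import random

def nested_sampling(loglike, prior_draw, n_live=50, n_iter=200, seed=1):
    """Minimal nested-sampling estimate of the log evidence, log Z."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logz = -math.inf
    for k in range(n_iter):
        worst = min(live, key=loglike)
        lstar = loglike(worst)
        # prior volume shrinks by ~exp(-1/n_live) per iteration, so the
        # k-th shell carries weight ~exp(-k/n_live) / n_live
        contrib = lstar - k / n_live - math.log(n_live)
        logz = contrib if logz == -math.inf else (
            max(logz, contrib) + math.log1p(math.exp(-abs(logz - contrib))))
        while True:  # naive constrained replacement by rejection sampling
            cand = prior_draw(rng)
            if loglike(cand) > lstar:
                live[live.index(worst)] = cand
                break
    return logz

# 1-D Gaussian likelihood, uniform prior on [-5, 5]: Z = sqrt(2*pi)/10,
# so log Z is about -1.4
logz = nested_sampling(lambda x: -0.5 * x * x,
                       lambda rng: rng.uniform(-5.0, 5.0))
```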
Homman, Lina E; Edwards, Alexis C; Cho, Seung Bin; Dick, Danielle M; Kendler, Kenneth S
2017-03-21
Alcohol problems and internalizing symptoms are consistently found to be associated, but how they relate to each other is unclear. The present study aimed to address limitations in the literature on the comorbidity of alcohol problems and internalizing symptoms by investigating the direction of effect between the phenotypes and possible gender differences in college students. We utilized data from a large longitudinal study of college students from the United States (N = 2607). Three waves of questionnaire-based data were collected over the first two years of college (in 2011-2013). Cross-lagged models were applied to examine the possible direction of effect of internalizing symptoms and alcohol problems. Possible effects of gender were investigated using multigroup modeling. There were significant correlations between alcohol problems and internalizing symptoms. A direction of effect was found between alcohol problems and internalizing symptoms but differed between genders. A unidirectional relationship varying with age was identified for males, where alcohol problems initially predicted internalizing symptoms, followed by internalizing symptoms predicting alcohol problems. For females, a unidirectional relationship existed wherein alcohol problems predicted internalizing symptoms. Conclusions/Importance: We conclude that the relationship between alcohol problems and internalizing symptoms is complex and differs between genders. In males, both phenotypes are predictive of each other, while in females the relationship is driven by alcohol problems. Importantly, our study examines a population-based sample, revealing that the observed relationships between alcohol problems and internalizing symptoms are not limited to individuals with clinically diagnosed mental health or substance use problems.
International Nuclear Information System (INIS)
Chung, Yong Sam; Moon, Jong Hwa; Chung, Young Ju; Jeong, Eui Sik; Lee, Sang Mi; Kang, Sang Hun; Cho, Seung Yeon; Kwon, Young Sik; Chung, Sang Wuk; Lee, Kyu Sung; Chun, Ki Hong; Kim, Nak Bae; Lee, Kil Yong; Yoon, Yoon Yeol; Chun, Sang Ki.
1997-09-01
This research report presents the results of applied research on air pollution monitoring using instrumental neutron activation analysis. For identification and standardization of the analytical method, 24 environmental samples were analyzed quantitatively, and the accuracy and precision of the method were measured. Using airborne particulate matter and a biomonitor chosen as environmental indicators, trace element concentrations of samples collected monthly at urban and rural sites were determined, and then statistics were calculated and factor analysis was carried out to investigate emission sources. Facilities for NAA were installed in the new HANARO reactor, and a functional test was performed for routine operation. In addition, a unified software code for NAA was developed to improve the accuracy, precision and capabilities of the analytical processes. (author). 103 refs., 61 tabs., 19 figs
2016-04-30
Warfare, Naval Sea Systems Command Acquisition Cycle Time: Defining the Problem David Tate, Institute for Defense Analyses Schedule Analytics Jennifer...research was comprised of the following high-level steps: Identify and review primary data sources 1...research. However, detailed reviews of the OMB IT Dashboard data revealed that schedule data is highly aggregated. Program start date and program end date
Alpay, Daniel
2015-01-01
This is an exercises book at the beginning graduate level, whose aim is to illustrate some of the connections between functional analysis and the theory of functions of one variable. A key role is played by the notions of positive definite kernel and of reproducing kernel Hilbert space. A number of facts from functional analysis and topological vector spaces are surveyed. Then, various Hilbert spaces of analytic functions are studied.
International Nuclear Information System (INIS)
FULLER, R.K.
1999-01-01
This document is the final report for tank 241-AP-106 grab samples. Three grab samples 6AP-98-1, 6AP-98-2 and 6AP-98-3 were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document
Tank 241-U-102, Grab Samples 2U-99-1, 2U-99-2 and 2U-99-3 Analytical Results for the Final Report
International Nuclear Information System (INIS)
STEEN, F.H.
1999-01-01
This document is the final report for tank 241-U-102 grab samples. Five grab samples were collected from riser 13 on May 26, 1999 and received by the 222-S laboratory on May 26 and May 27, 1999. Samples 2U-99-3 and 2U-99-4 were submitted to the Process Chemistry Laboratory for special studies. Samples 2U-99-1, 2U-99-2 and 2U-99-5 were submitted to the laboratory for analyses. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan for Fiscal year 1999 (TSAP) (Sasaki, 1999) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Fowler 1995, Mulkey and Miller 1998). The analytical results are presented in the data summary report. None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC) and plutonium 239 (Pu239) analyses exceeded the notification limits as stated in TSAP
Energy Technology Data Exchange (ETDEWEB)
BELL, K.E.
2000-05-11
This document is the format IV, final report for the tank 241-SY-102 (SY-102) grab samples taken in January 2000 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank SY-102 samples were performed as directed in Compatibility Grab Sampling and Analysis Plan for Fiscal Year 2000 (Sasaki 1999). No notification limits were exceeded. Preliminary data on samples 2SY-99-5, -6, and -7 were reported in ''Format II Report on Tank 241-SY-102 Waste Compatibility Grab Samples Taken in January 2000'' (Lockrem 2000). The data presented here represent the final results.
Energy Technology Data Exchange (ETDEWEB)
Yokoi, T [Building Research Institute, Tokyo (Japan); Sanchez-Sesma, F [Universidad National Autonoma de Mexico, (Mexico). Institute de Ingenieria
1997-05-27
A formulation is introduced for discretizing a boundary integral equation into an indirect boundary element method for the solution of 3-dimensional topographic problems. For problems of topographic response to seismic motion in a 2-dimensional in-plane field, Yokoi and Takenaka proposed a reference solution (the solution for a half-space elastic body with a flat free surface) for which an analytical solution is available. That is, they proposed a boundary integral equation that, by making use of the wave field in a semi-infinite elastic body with a flat free surface, effectively suppresses the non-physical waves that appear in the computed result as a consequence of truncating the discretized ground surface. They applied the proposed boundary integral equation, discretized into the indirect boundary element method, to several examples and demonstrated its validity. In this report, the equation is extended to deal with 3-dimensional topographic problems. A problem of a P-wave vertically incident on a flat free surface is solved with both the conventional and the proposed boundary integral equations, and the solutions are compared with each other. It is found that the new method, unlike the conventional one, can remove non-physical waves from the computed result. 4 figs.
Liu, Liling; Cui, Zhiyi; Deng, Yuzhong; Dean, Brian; Hop, Cornelis E C A; Liang, Xiaorong
2016-02-01
A high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay for the quantitative determination of NAD(+) in human whole blood using a surrogate analyte approach was developed and validated. Human whole blood was acidified using 0.5N perchloric acid at a ratio of 1:3 (v:v, blood:perchloric acid) during sample collection. 25μL of acidified blood was extracted using a protein precipitation method and the resulting extracts were analyzed using reverse-phase chromatography and positive electrospray ionization mass spectrometry. (13)C5-NAD(+) was used as the surrogate analyte for authentic analyte, NAD(+). The standard curve ranging from 0.250 to 25.0μg/mL in acidified human blood for (13)C5-NAD(+) was fitted to a 1/x(2) weighted linear regression model. The LC-MS/MS response between surrogate analyte and authentic analyte at the same concentration was obtained before and after the batch run. This response factor was not applied when determining the NAD(+) concentration from the (13)C5-NAD(+) standard curve since the percent difference was less than 5%. The precision and accuracy of the LC-MS/MS assay based on the five analytical QC levels were well within the acceptance criteria from both FDA and EMA guidance for bioanalytical method validation. Average extraction recovery of (13)C5-NAD(+) was 94.6% across the curve range. Matrix factor was 0.99 for both high and low QC indicating minimal ion suppression or enhancement. The validated assay was used to measure the baseline level of NAD(+) in 29 male and 21 female human subjects. This assay was also used to study the circadian effect of endogenous level of NAD(+) in 10 human subjects. Copyright © 2015 Elsevier B.V. All rights reserved.
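A 1/x² weighted linear calibration fit of the kind described has a simple closed form. A minimal sketch of generic weighted least squares (the synthetic data below stand in for real calibration standards and are not the assay's values):

```python
def wls_line(x, y, w):
    """Weighted least squares for y ~ a + b*x, minimizing sum w*(y - a - b*x)^2."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b  # intercept, slope

# calibration levels spanning 0.25-25 ug/mL, weighted 1/x^2
x = [0.25, 1.0, 5.0, 25.0]
y = [2.0 * xi + 0.1 for xi in x]                 # synthetic, exactly linear
a, b = wls_line(x, y, [1.0 / xi ** 2 for xi in x])
```

With exactly linear synthetic data any weighting recovers the same line; with real, heteroscedastic responses the 1/x² weights keep the low-concentration standards from being swamped by the high ones.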
A boundary-optimized rejection region test for the two-sample binomial problem.
Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A
2018-03-30
Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
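The key constraint in the construction is that the type I error rate must stay below α for every possible shared event rate p. That supremum is easy to evaluate numerically for a fixed rejection rule; a minimal sketch for 4 controls versus 4 treated, rejecting when all controls have the event and the treated count is at most c (an illustrative rule evaluated on a grid of p values, not the paper's sequentially maximized region):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def max_type1_error(c, n_c=4, n_t=4, grid=200):
    """Sup over the shared event rate p of
    P(all n_c controls have the event AND treated events <= c)."""
    worst = 0.0
    for g in range(grid + 1):
        p = g / grid
        alpha = binom_pmf(n_c, n_c, p) * sum(
            binom_pmf(k, n_t, p) for k in range(c + 1))
        worst = max(worst, alpha)
    return worst

# for c = 0 the worst case is p = 0.5, giving alpha = 0.5**8 (~0.0039)
alpha_c0 = max_type1_error(0)
```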
Huszank, Robert; Csedreki, László; Török, Zsófia
2017-02-07
The elemental composition of liquid materials is of interest in many fields of science and technology. In many cases, sample preparation or extraction can be complicated, or it would destroy the original environment before the analysis (for example, in the case of biological samples). However, multielement direct analysis of liquid samples can be realized with an external PIXE-PIGE measurement system. Particle-induced X-ray and gamma-ray emission spectroscopy (PIXE, PIGE) techniques were applied in an external (in-air) microbeam configuration for the trace and main element determination of liquid samples. The direct analysis of standard solutions of several metal salts and of human blood samples (whole blood, blood serum, blood plasma, and formed elements) was realized. From the blood samples, Na, P, S, Cl, K, Ca, Fe, Cu, Zn, and Br elemental concentrations were determined. The focused and scanned ion beam makes it possible to analyze very small sample volumes (∼10 μL). As the sample matrix consists of light elements, the analysis is possible at the ppm level. Using this external beam setup, it was found that the elemental composition of small-volume liquid samples can be determined routinely, while the liquid samples do not require any preparation and can thus be analyzed directly. At lower concentrations the method is also suitable (down to even the ∼1 ppm level), but with less accuracy and longer measurement times.
DEFF Research Database (Denmark)
Pöllänen, Roy; Virtanen, Sinikka; Kämäräinen, Meerit
In CAMNAR, an extensive interlaboratory exercise on the analytical methods used to determine several radionuclides present in environmental samples was organized. Activity concentrations of different natural radionuclides, such as Rn-222, Pb-210, Po-210, K-40, Ra-226, Ra-228 and isotopes of uranium, in addition to artificial Cs-137 and Am-241, were analysed from lake sediment samples and drinking water. The measurement techniques were gamma-ray spectrometry, alpha spectrometry, liquid scintillation counting and inductively coupled plasma mass spectrometry. Twenty-six laboratories from nine...
Butt, N.; Pidlisecky, A.; Ganshorn, H.; Cockett, R.
2015-12-01
The software company 3 Point Science has developed three interactive learning programs designed to teach, test and practice visualization skills and geoscience concepts. A study was conducted with 21 geoscience students at the University of Calgary who participated in 2-hour sessions of software interaction and written pre- and post-tests. Computer and SMART touch table interfaces were used to analyze user interaction, problem-solving methods and visualization skills. By understanding and pinpointing user problem-solving methods it is possible to reconstruct viewpoints and thought processes. This could allow us to give personalized feedback in real time, informing the user of problem-solving tips and possible misconceptions.
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available We define a special case of the vehicle routing problem with stochastic demands (SC-VRPSD) where customer demands are normally distributed. We propose a new linear model for computing the expected length of a tour in SC-VRPSD. The proposed model is based on the integration of the “Traveling Salesman Problem” (TSP) and the Assignment Problem. For large-scale problems, we also use an Iterated Local Search (ILS) algorithm in order to reach an effective solution.
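An ILS of the kind mentioned alternates a local search with a perturbation of the incumbent solution. A minimal sketch for the plain Euclidean TSP, using 2-opt local search and a simple segment-reversal perturbation (a stand-in for stronger moves such as the double bridge; none of this is the authors' SC-VRPSD model):

```python
import math
import random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

def iterated_local_search(pts, restarts=10, seed=0):
    rng = random.Random(seed)
    best = two_opt(pts, list(range(len(pts))))
    for _ in range(restarts):
        i, j = sorted(rng.sample(range(1, len(best)), 2))  # perturb
        cand = two_opt(pts, best[:i] + best[i:j][::-1] + best[j:])
        if tour_length(pts, cand) < tour_length(pts, best):
            best = cand
    return best

pts = [(0, 0), (1, 1), (0, 1), (1, 0)]       # unit square; optimum is 4.0
best = iterated_local_search(pts)
```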
International Nuclear Information System (INIS)
Steen, F.H.
1997-01-01
This document is the final report for tank 241-AP-107 grab samples. Three grab samples were collected from riser 1 on September 11, 1997. Analyses were performed on samples 7AP-97-1, 7AP-97-2 and 7AP-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1997) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Nuier, 1997). The analytical results are presented in the data summary report (Table 1). A notification was made to East Tank Farms Operations concerning low hydroxide in the tank and a hydroxide (caustic) demand analysis was requested. The request for sample analysis (RSA) (Attachment 2) received for AP-107 indicated that the samples were polychlorinated biphenyl (PCB) suspects. Therefore, prior to performing the requested analyses, aliquots were made to perform PCB analysis in accordance with the 222-S Laboratory administrative procedure, LAP-101-100. The results of this analysis indicated that no PCBs were present at 50 ppm and analysis proceeded as non-PCB samples. The results and raw data for the PCB analysis will be included in a revision to this document. The sample breakdown diagrams (Attachment 1) are provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed
DEFF Research Database (Denmark)
Andreasen, Sune Zoëga
the technology on a large scale from fulfilling its potential for maturing into applied technologies and products. In this work, we have taken the first steps towards realizing a capable and truly automated “sample-to-answer” analysis system, aimed at small molecule detection and quantification from a complex sample matrix. The main result is a working prototype of a microfluidic system, integrating centrifugal microfluidics for sample handling, supported liquid membrane extraction (SLM) for selective and effective sample treatment, and in-situ electrochemical detection. As a case study...
King, Harley D.; Chaffee, Maurice A.
2000-01-01
INTRODUCTION In 1996-1998 the U.S. Geological Survey (USGS) conducted a geochemical study of the Bureau of Land Management's (BLM) 5.5 million-acre Northern and Eastern Colorado Desert Resource Area (usually referred to as the NECD in this report), Imperial, Riverside, and San Bernardino Counties, southeastern California (figure 1). This study was done in support of the BLM's Coordinated Management Plan for the area. This report presents analytical data from this study. To provide comprehensive coverage of the NECD, we compiled and examined all available geochemical data, in digital form, from previous studies in the area, and made sample-site plots to aid in determining where sample-site coverage and analyses were sufficient, which samples should be re-analyzed, and where additional sampling was needed. Previous investigations conducted in parts of the current study area included the National Uranium Resource Evaluation (NURE) program studies of the Needles and Salton Sea 1° x 2° quadrangles; USGS studies of 12 BLM Wilderness Study Areas (WSAs) (Big Maria Mountains, Chemehuevi Mountains, Chuckwalla Mountains, Coxcomb Mountains, Mecca Hills, Orocopia Mountains, Palen-McCoy, Picacho Peak, Riverside Mountains, Sheephole Valley (also known as Sheep Hole/Cadiz), Turtle Mountains, and Whipple Mountains); and USGS studies in the Needles and El Centro 1° x 2° quadrangles done during the early 1990s as part of a project to identify the regional geochemistry of southern California. Areas where we did new sampling of rocks and stream sediments are mainly in the Chocolate Mountain Aerial Gunnery Range and in Joshua Tree National Park, which extends into the west-central part of the NECD, as shown in figure 1 and figure 2. This report contains analytical data for 132 rock samples and 1,245 stream-sediment samples collected by the USGS, and 362 stream-sediment samples and 189 soil samples collected during the NURE program. All samples are from the Northern and Eastern Colorado
Energy Technology Data Exchange (ETDEWEB)
Basso Barichello, Liliane; Dias da Cunha, Rudnei [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst. de Matematica; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada
2015-05-15
A nodal formulation of a fixed-source two-dimensional neutron transport problem, in Cartesian geometry, defined in a heterogeneous medium, is solved by an analytical approach. Explicit expressions, in terms of the spatial variables, are derived for averaged fluxes in each region in which the domain is subdivided. The procedure is an extension of an analytical discrete ordinates method, the ADO method, for the solution of the two-dimensional homogeneous medium case. The scheme is developed from the discrete ordinates version of the two-dimensional transport equation along with the level symmetric quadrature scheme. As usual for nodal schemes, relations between the averaged fluxes and the unknown angular fluxes at the contours are introduced as auxiliary equations. Numerical results are in agreement with results available in the literature.
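For orientation, the discrete-ordinates equations such a scheme starts from can be written, for isotropic scattering, roughly as follows (the notation and the normalization of the scattering term are assumptions; conventions vary with the quadrature set):

```latex
\mu_m \frac{\partial \Psi_m}{\partial x}(x,y)
  + \eta_m \frac{\partial \Psi_m}{\partial y}(x,y)
  + \sigma_t\, \Psi_m(x,y)
  = \frac{\sigma_s}{4}\sum_{n=1}^{M} w_n\, \Psi_n(x,y) + Q(x,y),
\qquad m = 1,\dots,M,
```

where the $(\mu_m,\eta_m)$ are the level symmetric quadrature directions with weights $w_n$. The nodal step then works with transverse-averaged fluxes, e.g. $\bar{\Psi}_m^{y}(x) = \frac{1}{h}\int_0^h \Psi_m(x,y)\,dy$ over each region, with the contour angular fluxes entering through auxiliary equations as the abstract describes.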
Energy Technology Data Exchange (ETDEWEB)
Dupuis, M [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires, departement de physico-chimie, services des isotopes stables
1971-07-01
Two analytical representations of the Laplace transform of the time autocorrelation of a dynamical variable, namely the moment expansion and Mori's continued fraction expansion, are investigated from the point of view of structure and convergence properties, and the relation between them is established. The general theory is applied first to a dynamical model exactly solvable, the isotopic impurity in a linear chain of coupled harmonic oscillators, and then to two stochastic models recently introduced by Gordon for the rotational diffusion of molecules. In the latter case, the continued fraction expansion yields simple analytical expressions for the infrared absorption band shapes, showing that these models contain all the features of observed shapes in compressed gases, liquids and solutions. (author)
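The two representations being compared can be written schematically as follows (standard forms for a classical autocorrelation $C(t)$; the symbols are assumptions, not necessarily the author's notation). The moment (short-time) expansion and Mori's continued fraction for the Laplace transform are

```latex
C(t) = C(0)\sum_{n=0}^{\infty} \frac{(-1)^n M_{2n}}{(2n)!}\, t^{2n},
\qquad
\tilde{C}(z) = \int_0^{\infty} e^{-zt}\, C(t)\, dt
  = \cfrac{C(0)}{z + \cfrac{\Delta_1}{z + \cfrac{\Delta_2}{z + \cdots}}},
```

where the coefficients $\Delta_n$ are fixed by the even moments $M_{2n}$; truncating the continued fraction at a finite stage is what yields the closed band-shape expressions mentioned in the abstract.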
Kinde, Tristan F; Lopez, Thomas D; Dutta, Debashis
2015-03-03
While the use of sodium dodecyl sulfate (SDS) in separation buffers allows efficient analysis of complex mixtures, its presence in the sample matrix is known to severely interfere with the mass-spectrometric characterization of analyte molecules. In this article, we report a microfluidic device that addresses this analytical challenge by enabling inline electrospray ionization mass spectrometry (ESI-MS) of low molecular weight cationic samples prepared in SDS containing matrices. The functionality of this device relies on the continuous extraction of analyte molecules into an SDS-free solvent stream based on the free-flow zone electrophoresis (FFZE) technique prior to their ESI-MS analysis. The reported extraction was accomplished in our current work in a glass channel with microelectrodes fabricated along its sidewalls to realize the desired electric field. Our experiments show that a key challenge to successfully operating such a device is to suppress the electroosmotically driven fluid circulations generated in its extraction channel that otherwise tend to vigorously mix the liquid streams flowing through this duct. A new coating medium, N-(2-triethoxysilylpropyl) formamide, recently demonstrated by our laboratory to nearly eliminate electroosmotic flow in glass microchannels was employed to address this issue. Applying this surface modifier, we were able to efficiently extract two different peptides, human angiotensin I and MRFA, individually from an SDS containing matrix using the FFZE method and detect them at concentrations down to 3.7 and 6.3 μg/mL, respectively, in samples containing as much as 10 mM SDS. Notice that in addition to greatly reducing the amount of SDS entering the MS instrument, the reported approach allows rapid solvent exchange for facilitating efficient analyte ionization desired in ESI-MS analysis.
DEFF Research Database (Denmark)
Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A
2014-01-01
the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...
2010-10-01
... chromatograph. Detection limit: 0.04 ppm. Recommended air volume and sampling rate: 10 liter at 0.2 liter/min. 1... nitric acid. The benzene is converted to nitrobenzene. The carbon disulfide layer is removed, dried with...=molecular weight of benzene. 8. Backup data 8.1 Detection limit—Air Samples. The detection limit for the...
SOLUTION OF A MULTIVARIATE STRATIFIED SAMPLING PROBLEM THROUGH CHEBYSHEV GOAL PROGRAMMING
Directory of Open Access Journals (Sweden)
Mohd. Vaseem Ismail
2010-12-01
Full Text Available In this paper, we consider the problem of minimizing the variances for the various characters with a fixed (given) budget. Each convex objective function is first linearised at its minimal point where it meets the linear cost constraint. The resulting multiobjective linear programming problem is then solved by Chebyshev goal programming. A numerical example is given to illustrate the procedure.
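For context, the classical single-characteristic solution that this multivariate, budget-constrained problem generalizes is Neyman allocation, n_h ∝ N_h S_h. A minimal sketch of that textbook formula (background only, not the paper's Chebyshev goal-programming procedure):

```python
def neyman_allocation(n_total, strata):
    """strata: list of (N_h, S_h) pairs (stratum size, stratum std. dev.).
    Returns (possibly fractional) sample sizes n_h proportional to N_h * S_h."""
    prods = [N * S for N, S in strata]
    total = sum(prods)
    return [n_total * p / total for p in prods]

# two strata: the second is twice as large and twice as variable,
# so it receives four times the sample
alloc = neyman_allocation(100, [(500, 1.0), (1000, 2.0)])  # → [20.0, 80.0]
```

With several characters, each character prefers a different allocation, which is exactly why the paper turns to a multiobjective formulation.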
An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)
Arikan, Elif Esra; Ünal, Hasan
2015-01-01
To pose a problem refers to the creative activity for mathematics education. The purpose of the study was to explore the eighth grade students' problem posing ability. Three learning domains such as requiring four operations, fractions and geometry were chosen for this reason. There were two classes which were coded as class A and class B. Class A…
Espina, Virginia; Mueller, Claudius; Edmiston, Kirsten; Sciro, Manuela; Petricoin, Emanuel F; Liotta, Lance A
2009-08-01
Instability of tissue protein biomarkers is a critical issue for molecular profiling. Pre-analytical variables during tissue procurement, such as time delays during which the tissue remains stored at room temperature, can cause significant variability and bias in downstream molecular analysis. Living tissue, ex vivo, goes through a defined stage of reactive changes that begin with oxidative, hypoxic and metabolic stress, and culminate in apoptosis. Depending on the delay time ex vivo, and reactive stage, protein biomarkers, such as signal pathway phosphoproteins will be elevated or suppressed in a manner which does not represent the biomarker levels at the time of excision. Proteomic data documenting reactive tissue protein changes post collection indicate the need to recognize and address tissue stability, preservation of post-translational modifications, and preservation of morphologic features for molecular analysis. Based on the analysis of phosphoproteins, one of the most labile tissue protein biomarkers, we set forth tissue procurement guidelines for clinical research. We propose technical solutions for (i) assessing the state of protein analyte preservation and specimen quality via identification of a panel of natural proteins (surrogate stability markers), and (ii) using multi-purpose fixative solution designed to stabilize, preserve and maintain proteins, nucleic acids, and tissue architecture.
Directory of Open Access Journals (Sweden)
Richard J. Venedam
2005-02-01
Full Text Available The capabilities of a "universal platform" for the deployment of analytical sensors in the field for long-term monitoring of environmental contaminants were expanded in this investigation. The platform was previously used to monitor trichloroethene in monitoring wells and at groundwater treatment systems (1,2). The platform was interfaced with chromium (VI) and conductivity analytical systems to monitor shallow wells installed adjacent to the Columbia River at the 100-D Area of the Hanford Site, Washington. A groundwater plume of hexavalent chromium is discharging into the Columbia River through the gravel beds used by spawning salmon. The sampling/analytical platform was deployed for the purpose of collecting data on subsurface hexavalent chromium concentrations at more frequent intervals than was possible with the previous sampling and analysis methods employed at the Site.
Directory of Open Access Journals (Sweden)
Narong Wichapa
Full Text Available The selection of a suitable location for infectious waste disposal is one of the major problems in waste management. Determining the location of infectious waste disposal centers is a difficult and complex process because it requires combining social and environmental factors that are hard to interpret, and cost factors that require the allocation of resources. Additionally, it depends on several regulations. Based on the actual conditions of a case study of forty hospitals and three candidate municipalities in the sub-Northeast region of Thailand, we considered multiple factors such as infrastructure, geological, and social & environmental factors, calculating global priority weights using the fuzzy analytical hierarchy process (FAHP). After that, a new multi-objective facility location problem model combining FAHP and goal programming (GP), namely the FAHP-GP model, was tested. The proposed model can lead to the selection of new suitable locations for infectious waste disposal by considering both total cost and final priority weight objectives. The novelty of the proposed model is the simultaneous combination of relevant factors that are difficult to interpret and cost factors, which require the allocation of resources. Keywords: Multi-objective facility location problem, Fuzzy analytic hierarchy process, Infectious waste disposal centers
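The crisp core of the FAHP step is deriving priority weights from a pairwise-comparison matrix. A minimal sketch using the geometric-mean (row) method of ordinary AHP (the paper's fuzzy extension replaces the crisp entries with fuzzy numbers; the comparison values below are invented):

```python
import math

def ahp_weights(matrix):
    """Priority weights from a (reciprocal) pairwise-comparison matrix,
    via row geometric means normalized to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# a consistent 2x2 comparison: criterion 1 judged twice as important as 2
weights = ahp_weights([[1.0, 2.0], [0.5, 1.0]])
```

For a perfectly consistent matrix this reproduces the underlying importance ratios exactly; for larger, mildly inconsistent expert judgments it gives a standard approximation to the principal-eigenvector weights.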
International Nuclear Information System (INIS)
Semerok, A.; Dutouquet, C.
2014-01-01
Ultrashort pulse laser microablation coupled with optical emission spectroscopy was studied to obtain several micro-LIBS analytical features (shot-to-shot reproducibility, spectral line intensity and lifetime, calibration curves, detection limits). Laser microablation of Al matrix samples with known Cu and Mg concentrations was performed by single and double pulses of 50 fs and 1 ps pulse duration in air and with an Ar jet. The micro-LIBS analytical features obtained under the different experimental conditions were characterized and compared. The highest shot-to-shot reproducibility and gain in plasma spectral line intensity were obtained with double pulses and an Ar jet for both 50 fs and 1 ps pulse durations. The best calibration curves were obtained with 1 ps pulse duration with an Ar jet. Micro-LIBS with ultrashort double pulses may find effective application in surface elemental microcartography. - Highlights: • Analytical performances of micro-LIBS with ultrashort double pulses were studied. • The maximal line intensity gain of 20 was obtained with double pulses and Ar-jet. • LIBS gain was obtained without additional ablation of a sample by the second pulse. • LIBS properties were almost the same for both 50 fs and 1 ps pulses. • The micro-LIBS detection limit was around 35 ppm
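Detection limits such as the ~35 ppm figure quoted above are commonly estimated from a calibration curve as LOD = 3·σ_blank / slope. The sketch below shows that arithmetic with invented line intensities and an assumed blank standard deviation; none of the numbers come from the paper.

```python
# Sketch: estimating a LIBS limit of detection from a calibration curve as
# LOD = 3 * sigma_blank / slope. All values below are invented for illustration.

def linfit(x, y):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

conc = [0, 50, 100, 200, 400]           # ppm Cu (hypothetical standards)
intensity = [12, 260, 515, 1010, 2030]  # background-corrected intensity (a.u.)
slope, intercept = linfit(conc, intensity)

sigma_blank = 60.0                      # std. dev. of repeated blank shots (assumed)
lod = 3 * sigma_blank / slope           # limit of detection in ppm
```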
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.
DEFF Research Database (Denmark)
Raposo, Francisco; Fernández-Cegrí, V.; De la Rubia, M.A.
of a general standard method and high-quality certified reference materials (CRMs), currently the traceability of the COD determination in such samples is not easy to check. Proficiency testing (PT) is a powerful tool that can be used to test the performance that the participants' laboratories can achieve. Two...
Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J
2011-11-01
This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One advantage of planar chromatography over its column counterpart is that each TLC run can be performed on a previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by heavy biological matrix loading. In the present study, components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by freeze-drying. Low-molecular-mass compounds with polarity ranging from estetrol to progesterone, derived from environmental samples (lake water, untreated and treated sewage waters), were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of Middle Pomerania in the northern part of Poland can easily be observed on the obtained micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we also demonstrated that the micro-TLC-based analytical approach can be applied as an effective method for internal standard (IS) substance searches. Generally, the described methodology can be applied for fast fractionation or screening of the…
Nkuba, Mabula; Hermenau, Katharin; Goessmann, Katharina; Hecker, Tobias
2018-04-12
Little is known about the prevalence of mental health problems among adolescents in Sub-Saharan Africa. Research has consistently determined violence and maltreatment to be important risk factors. In this study, we examined the prevalence of mental health problems among adolescents in Tanzania, as well as the association with exposure to violence and maltreatment. We administered a set of questionnaires (e.g., the Strengths and Difficulties Questionnaire and the Conflict Tactics Scale) to a nationally representative sample of 700 Tanzanian secondary school children (52% girls; age 14.92 years, SD = 1.02) and 333 parents or primary caregivers (53% females; age 43.47 years, SD = 9.02). 41% of the students reported an elevated level of mental health problems (emotional problems 40%, peer problems 63%, conduct problems 45%, hyperactivity 17%) in the past 6 months. Concordantly, 31% of parents reported observing an elevated level of mental health problems in their children (emotional problems 37%, peer problems 54%, conduct problems 35%, hyperactivity 17%). After controlling for other risk factors, we found significant associations between physical violence by parents and adolescents' mental health problems as reported by students (β = 0.15) and by their parents (β = 0.33). Our findings suggest a high prevalence of mental health problems, as measured with screening tools, among secondary school students in Tanzania, as well as an association between physical violence by parents and adolescents' mental health problems. Our findings emphasize the need to inform the population at large about the potentially adverse consequences associated with violence against children and adolescents.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treat…
Dziadosz, Marek
2018-05-01
Multiple analyte adduct formation was examined and discussed in the context of reproducible signal detection in liquid chromatography-tandem mass spectrometry applied to the analysis of biologically-related samples. Appropriate infusion solutions were prepared in H2O/methanol (3/97, v/v) with 1 mM sodium acetate and 10 mM acetic acid. An API 4000 QTrap tandem mass spectrometer was used for experiments performed in the negative scan mode (-Q1 MS) and the negative enhanced product ion mode (-EPI). γ‑Hydroxybutyrate and its deuterated form were used as model compounds to highlight both the complexity of adduct formation in the mobile phases commonly used and the effective signal compensation achieved by applying isotope-labelled analytes as internal standards.
International Nuclear Information System (INIS)
Yin Chen; Xu Mingyu
2009-01-01
We set up a one-dimensional mathematical model, with a Caputo fractional operator, of a drug released from a polymeric matrix that can be dissolved into a solvent. A problem with two moving boundaries in time-fractional anomalous diffusion of order α ∈ (0, 1], under the assumption that the dissolving boundary dissolves slowly, is presented in this paper. The two-parameter regular perturbation technique and Fourier and Laplace transform methods are used. A dimensionless asymptotic analytical solution is given in terms of the Wright function.
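For reference, the Wright function in which such solutions are expressed, and the time-fractional diffusion equation it is associated with, take the following standard textbook forms (these definitions are general conventions, not reproduced from the paper):

```latex
W_{\lambda,\mu}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{k!\,\Gamma(\lambda k + \mu)},
\qquad \lambda > -1,
\qquad\text{with governing equation}\qquad
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = D\,\frac{\partial^{2} u}{\partial x^{2}},
\quad 0 < \alpha \le 1,
```

where the fractional time derivative is taken in the Caputo sense; for α = 1 the equation reduces to classical diffusion.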
Measurement of highly active samples of ultrashort-lived radionuclides and its problems
International Nuclear Information System (INIS)
van der Baan, J.G.; Panek, K.J.
1985-01-01
The measurement of highly active eluates obtained from the generators for ultrashort-lived radionuclides poses several problems, which are briefly discussed using the example of the 195mHg → 195mAu generator. To overcome some of these problems, the construction of a multiple single-channel analyzer that allows high count rates is described, as well as the counting technique applicable to highly active eluates.
Halyo, N.; Caglayan, A. K.
1976-01-01
This paper considers the control of a continuous linear plant disturbed by white plant noise when the control is constrained to be a piecewise constant function of time; i.e. a stochastic sampled-data system. The cost function is the integral of quadratic error terms in the state and control, thus penalizing errors at every instant of time while the plant noise disturbs the system continuously. The problem is solved by reducing the constrained continuous problem to an unconstrained discrete one. It is shown that the separation principle for estimation and control still holds for this problem when the plant disturbance and measurement noise are Gaussian.
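The reduction described above — a continuous LQ plant driven by white noise, with the control constrained to be piecewise constant over each sampling interval, turned into an unconstrained discrete problem — can be sketched for a scalar plant. The plant parameters, sampling period, and cost weights below are arbitrary illustrations, and the discrete weights are taken as simple approximations rather than the exact integrated weights derived in the paper.

```python
# Sketch: zero-order-hold discretization of dx/dt = a*x + b*u, followed by the
# discrete Riccati recursion for the optimal sampled-data feedback gain.
# All numerical values are illustrative assumptions.
import math

a, b, T = -1.0, 1.0, 0.1               # plant coefficients and sampling period
Ad = math.exp(a * T)                   # discrete state transition over one period
Bd = b * (math.exp(a * T) - 1) / a     # zero-order-hold input coefficient
q, r = 1.0, 0.1                        # approximate discrete state/control weights

# Iterate the discrete algebraic Riccati recursion to a fixed point.
P = q
for _ in range(1000):
    K = (Bd * P * Ad) / (r + Bd * P * Bd)   # feedback gain: u_k = -K * x_k
    P = q + Ad * P * Ad - Ad * P * Bd * K
gain = K
```

Under the separation principle noted in the abstract, the state in the control law would be replaced by its Kalman-filter estimate without changing the gain.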
Quilty, Lena C; Avila Murati, Daniela; Bagby, R Michael
2014-03-01
Many gamblers would prefer to reduce gambling on their own rather than to adopt an abstinence approach within the context of a gambling treatment program. Yet responsible gambling guidelines lack quantifiable markers to guide gamblers in wagering safely. To address these issues, the current investigation implemented receiver operating characteristic (ROC) analysis to identify behavioral indicators of harmful and problem gambling. Gambling involvement was assessed in 503 participants (275 psychiatric outpatients and 228 community gamblers) with the Canadian Problem Gambling Index. Overall gambling frequency, duration, and expenditure were able to distinguish harmful and problematic gambling at a moderate level. Indicators of harmful gambling were generated for engagement in specific gambling activities: frequency of tickets and casino; duration of bingo, casino, and investments; and expenditures on bingo, casino, sports betting, games of skill, and investments. Indicators of problem gambling were similarly produced for frequency of tickets and casino, and expenditures on bingo, casino, games of skill, and investments. Logistic regression analyses revealed that overall gambling frequency uniquely predicted the presence of harmful and problem gambling. Furthermore, frequency indicators for tickets and casino uniquely predicted the presence of both harmful and problem gambling. Together, these findings contribute to the development of an empirically based method enabling the minimization of harmful or problem gambling through self-control rather than abstinence.
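The ROC-based identification of behavioral indicators described above amounts to scanning candidate cutoffs on a gambling-involvement measure and scoring each by how well it separates problem from non-problem gamblers. A common criterion is the Youden index (sensitivity + specificity − 1); the sketch below uses it on invented data, not the study's sample.

```python
# Sketch: choosing a behavioral cutoff via the Youden index on a ROC curve.
# The gambling-days data and problem flags below are hypothetical.

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

days = [1, 2, 2, 3, 4, 8, 10, 12, 15, 20]  # gambling days/month (hypothetical)
flag = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]      # 1 = screened positive for problems
cutoff, j = youden_cutoff(days, flag)
```

A cutoff selected this way gives gamblers a concrete, quantifiable marker ("fewer than N days per month") compatible with a self-control rather than abstinence approach.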
Ghoneim, Nahed Mohamed Mahmoud
2013-01-01
The current study focused on the problems which students encounter while listening to the English language, the mental processes they activate in listening comprehension, and the strategies they use in different phases of comprehension. Also, it aimed to find out whether there were any differences between advanced and intermediate students in…