Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of a finite population total under the model-based framework. A nonparametric regression approach to estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Multilevel systematic sampling to estimate total fruit number for yield forecasts
DEFF Research Database (Denmark)
Wulfsohn, Dvora-Laio; Zamora, Felipe Aravena; Tellez, Camilla Potin
2012-01-01
procedure for unbiased estimation of fruit number for yield forecasts. In the Spring of 2009 we estimated the total number of fruit in several rows of each of 14 commercial fruit orchards growing apple (11 groves), kiwifruit (two groves), and table grapes (one grove) in central Chile. Survey times were 10...
Directory of Open Access Journals (Sweden)
Mohammad Ali Sheikh-Mohseni
2016-12-01
A modified electrode was prepared by modification of a carbon paste electrode (CPE) with graphene nano-sheets. The fabricated modified electrode exhibited electrocatalytic activity toward gallic acid (GA) oxidation because of its good conductivity, low electron transfer resistance and catalytic effect. The graphene-modified CPE had a lower overvoltage and an enhanced electrical current with respect to the bare CPE for the oxidation of GA. The oxidation potential of GA was decreased by more than 210 mV at the modified electrode. The modified electrode responded to GA in the concentration range of 3.0 × 10(-5) to 1.5 × 10(-4) M with high sensitivity by differential pulse voltammetry. A detection limit of 1.1 × 10(-7) M was also obtained with this modified electrode for GA. The electrode was used for the successful determination of GA in plant samples. Therefore, the content of total polyphenols in plant samples can be determined by the proposed modified electrode based on the concentration of GA in the sample.
International Nuclear Information System (INIS)
Simakov, V.A.; Kordyukov, S.V.; Petrov, E.N.
1988-01-01
A method of background estimation in the short-wave spectral region during determination of total sample composition by the X-ray fluorescence method is described. Thirteen types of different rocks with considerable variations in base composition and with Zr, Nb, Th and U contents below 7 × 10(-3)% are investigated. The suggested method of background accounting provides a lower statistical error of the background estimation than direct isolated measurement, and the reliability of its determination in the short-wave region is independent of the sample base. The possibilities of the suggested method are estimated for artificial mixtures conforming, in the content of the main component, to technological concentrates of niobium, zirconium and tantalum.
Sigrist, Mirna; Hilbe, Nandi; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio
2016-11-01
An optimized flow injection hydride generation atomic absorption spectroscopy (FI-HGAAS) method was used to determine total arsenic in selected food samples (beef, chicken, fish, milk, cheese, egg, rice, rice-based products, wheat flour, corn flour, oats, breakfast cereals, legumes and potatoes) and to estimate their contributions to inorganic arsenic dietary intake. The limit of detection (LOD) and limit of quantification (LOQ) values obtained were 6 μg kg(-1) and 18 μg kg(-1), respectively. The mean recovery range obtained for all foods at a fortification level of 200 μg kg(-1) was 85-110%. Accuracy was evaluated using dogfish liver certified reference material (DOLT-3 NRC) for trace metals. The highest total arsenic concentrations (in μg kg(-1)) were found in fish (152-439), rice (87-316) and rice-based products (52-201). The contribution to inorganic arsenic (i-As) intake was calculated from the mean i-As content of each food (calculated by applying conversion factors to total arsenic data) and the mean consumption per day. The primary contributors to inorganic arsenic intake were wheat flour, including its proportion in wheat flour-based products (breads, pasta and cookies), followed by rice; these foods account for close to 53% and 17% of the intake, respectively. The i-As dietary intake, estimated as 10.7 μg day(-1), was significantly lower than that from drinking water in vast regions of Argentina. Copyright © 2016 Elsevier Ltd. All rights reserved.
Particle size distributions (PSD) have long been used to more accurately estimate the PM10 fraction of total particulate matter (PM) stack samples taken from agricultural sources. These PSD analyses were typically conducted using a Coulter Counter with 50 micrometer aperture tube. With recent increa...
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
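The one-stage designs described above lend themselves to a short worked example. The sketch below expands visitor counts from a simple random sample of days to a seasonal total; the season length, sampled days and counts are hypothetical, not taken from the paper.

```python
# One-stage expansion estimate of total recreational use at a single-access
# site: count visitors on n randomly chosen days out of an N-day season.
N_DAYS = 90                          # days in the recreation season (assumed)
counts = [52, 38, 61, 44, 47, 58]    # visitor counts on the sampled days (assumed)
n = len(counts)

# Expansion estimator of the seasonal total
total_hat = N_DAYS / n * sum(counts)

# Variance of the estimated total under simple random sampling of days,
# with the finite population correction (1 - n/N)
mean = sum(counts) / n
s2 = sum((y - mean) ** 2 for y in counts) / (n - 1)
var_hat = N_DAYS ** 2 * (1 - n / N_DAYS) * s2 / n

print(total_hat)   # 4500.0 estimated visits for the season
```

A two-stage version would first sample access points, then sample days within each selected access point, and expand at both stages.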
Cao, X-L; Perez-Locas, C; Dufresne, G; Clement, G; Popovic, S; Beraldin, F; Dabeka, R W; Feeley, M
2011-06-01
A total of 154 food composite samples from the 2008 total diet study in Quebec City were analysed for bisphenol A (BPA), and BPA was detected in less than half (36%, or 55 samples) of the samples tested. High concentrations of BPA were found mostly in the composite samples containing canned foods, with the highest BPA level being observed in canned fish (106 ng g(-1)), followed by canned corn (83.7 ng g(-1)), canned soups (22.2-44.4 ng g(-1)), canned baked beans (23.5 ng g(-1)), canned peas (16.8 ng g(-1)), canned evaporated milk (15.3 ng g(-1)), and canned luncheon meats (10.5 ng g(-1)). BPA levels in baby food composite samples were low, with 2.75 ng g(-1) in canned liquid infant formula, and 0.84-2.46 ng g(-1) in jarred baby foods. BPA was also detected in some foods that are not canned or in jars, such as yeast (8.52 ng g(-1)), baking powder (0.64 ng g(-1)), some cheeses (0.68-2.24 ng g(-1)), breads and some cereals (0.40-1.73 ng g(-1)), and fast foods (1.1-10.9 ng g(-1)). Dietary intakes of BPA were low for all age-sex groups, with 0.17-0.33 µg kg(-1) body weight day(-1) for infants, 0.082-0.23 µg kg(-1) body weight day(-1) for children aged from 1 to 19 years, and 0.052-0.081 µg kg(-1) body weight day(-1) for adults, well below the established regulatory limits. BPA intakes from 19 of the 55 samples account for more than 95% of the total dietary intakes, and most of the 19 samples were either canned or in jars. Intakes of BPA from non-canned foods are low.
Procedure for estimating permanent total enclosure costs
Energy Technology Data Exchange (ETDEWEB)
Lukey, M E; Prasad, C; Toothman, D A; Kaplan, N
1999-07-01
Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA, by Pacific Environmental Services, Inc. (PES) and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.
Graph Sampling for Covariance Estimation
Chepuri, Sundeep Prabhakar
2017-04-25
In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
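The central least-squares idea can be sketched compactly. The toy example below assumes a small path graph, an illustrative graph filter and an arbitrary vertex subset (none of which come from the paper); it shows how the covariance of the vertex-subsampled signal is linear in the power spectrum p, so p — and with it the full covariance — can be recovered without spectral priors.

```python
import numpy as np

# Stationary graph signal: covariance C = U diag(p) U^T, where U are the
# Laplacian eigenvectors and p is the graph power spectrum.
N = 6
# Path-graph Laplacian (its eigenvalues are distinct, aiding identifiability)
L = np.diag([1.0, 2.0, 2.0, 2.0, 2.0, 1.0])
L -= np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
w, U = np.linalg.eigh(L)

p_true = 1.0 / (1.0 + w)              # spectrum of an assumed graph filter
C = U @ np.diag(p_true) @ U.T         # full N x N covariance

idx = [0, 2, 5]                       # subsampled vertices (illustrative choice)
A = U[idx, :]                         # rows of U at the sampled vertices

# vec(A diag(p) A^T) = G p, where column k of G is vec(a_k a_k^T)
G = np.stack([np.outer(A[:, k], A[:, k]).ravel() for k in range(N)], axis=1)
C_sub = C[np.ix_(idx, idx)]           # observed covariance of the subsampled signal
p_hat, *_ = np.linalg.lstsq(G, C_sub.ravel(), rcond=None)

C_hat = U @ np.diag(p_hat) @ U.T      # reconstructed full covariance
```

In practice C_sub would be a sample covariance of the subsampled observations rather than the exact one used here, and the paper's greedy designs choose idx for the best compression.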
Estimation of Total Error in DWPF Reported Radionuclide Inventories
International Nuclear Information System (INIS)
Edwards, T.B.
1995-01-01
This report investigates the impact of random errors due to measurement and sampling on the reported concentrations of radionuclides in DWPF's filled-canister inventory resulting from each macro-batch. The objective of this investigation is to estimate the variance of the total error in reporting these radionuclide concentrations.
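As a minimal illustration of how independent sampling and measurement errors combine into a total reporting error, the sketch below propagates two relative standard deviations; the numbers are made-up illustrations, not DWPF values.

```python
import math

# Hypothetical error components for a reported concentration.
rsd_sampling = 0.03        # 3% relative std dev from batch sampling (assumed)
rsd_measurement = 0.04     # 4% relative std dev from the analytical method (assumed)

# For independent error sources, variances add.
rsd_total = math.sqrt(rsd_sampling**2 + rsd_measurement**2)

conc = 120.0               # reported radionuclide concentration (arbitrary units)
ci95 = 1.96 * rsd_total * conc   # approximate 95% interval half-width

print(rsd_total)           # 0.05
```

With a 3% sampling and 4% measurement relative error, the total relative error is 5%, which is the kind of variance estimate the report seeks for each reported radionuclide.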
Al Haddabi, Buthaina; Al Lawati, Haider A J; Suliman, FakhrEldin O
2016-04-01
Two chemiluminescence-microfluidic (CL-MF) systems, namely Ce(IV)-rhodamine B (RB) and Ce(IV)-rhodamine 6G (R6G), for the determination of the total phenolic content in teas and some sweeteners were evaluated. The results indicated that the Ce(IV)-R6G system was more sensitive than the Ce(IV)-RB CL system. Therefore, a simple CL-MF method based on the CL of Ce(IV)-R6G was developed, and the sensitivity, selectivity and stability of this system were evaluated. Selected phenolic compounds (PCs), such as quercetin (QRC), catechin (CAT), rutin (RUT), gallic acid (GA), caffeic acid (CA) and syringic acid (SA), produced analytically useful chemiluminescence signals with low detection limits ranging from 0.35 nmol L(-1) for QRC to 11.31 nmol L(-1) for SA. The mixing sequence and the chip design were crucial, as the sensitivity and reproducibility could be substantially affected by these two factors. In addition, the anionic surfactant sodium dodecyl sulfate (SDS) can significantly enhance the CL signal intensity, by as much as 300% for the QRC solution. Spectroscopic studies indicated that the enhancement was due to a strong guest-host interaction between the cationic R6G molecules and the anionic amphiphilic environment. Other parameters that could affect the CL intensities of the PCs were carefully optimized. Finally, the method was successfully applied to tea and sweetener samples. Six different tea samples exhibited total phenolic/antioxidant levels from 7.32 to 13.5 g per 100 g of sample with respect to GA. Four different sweetener samples were also analyzed and exhibited total phenolic/antioxidant levels from 500.9 to 3422.9 mg kg(-1) with respect to GA. The method was selective, rapid and sensitive when used to estimate the total phenolic/antioxidant level, and the results were in good agreement with those reported for honey and tea samples. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Caldwell, J.T.; Cates, M.R.; Franks, L.A.; Kunz, W.E.
1985-01-01
Simultaneous photon and neutron interrogation of samples for the quantitative determination of total fissile nuclide and total fertile nuclide material present is made possible by the use of an electron accelerator. Prompt and delayed neutrons produced from the resulting induced fissions are counted using a single detection system, allowing the resolution of the contributions from each interrogating flux and leading in turn to the quantitative determination sought. Detection limits for 239Pu are estimated to be about 3 mg using prompt fission neutrons and about 6 mg using delayed neutrons.
Population: Census Bureau Total Estimates (2010-2012)
Earth Data Analysis Center, University of New Mexico — Total population estimates are estimates of the total number of residents living in an area on July 1 of each year. The Census Bureau's Population Division produces...
Estimation of population mean under systematic sampling
Noor-ul-amin, Muhammad; Javaid, Amjad
2017-11-01
In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
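A minimal sketch of the classical ratio estimator that such generalized estimators build on, with hypothetical sample values and a hypothetical known auxiliary mean:

```python
# Ratio estimation of a population mean: the study variable y is observed on
# a systematic sample, the auxiliary variable x is observed on the same units,
# and the population mean of x (X_bar) is known. All values are illustrative.
y = [12.0, 15.0, 11.0, 14.0, 13.0]   # study variable on the sampled units
x = [4.0, 5.0, 4.0, 5.0, 4.5]        # auxiliary variable on the same units
X_bar = 4.6                          # known population mean of x (assumed)

y_bar = sum(y) / len(y)
x_bar = sum(x) / len(x)

# Scale the sample mean by how the auxiliary sample mean deviates
# from its known population mean
y_bar_ratio = y_bar * (X_bar / x_bar)
print(y_bar_ratio)
```

The generalized estimators of the paper replace the simple ratio with combinations involving coefficients of correlation, kurtosis and variation, and add adjustments for non-response.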
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
algorithms; sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea...
Plasticizers in total diet samples, baby food and infant formulae
DEFF Research Database (Denmark)
Petersen, Jens Højslev; Breindahl, T.
2000-01-01
The plasticizers di-n-butylphthalate (DBP), butylbenzylphthalate (BBP), di-2-(ethylhexyl)phthalate (DEHP) and di-2-(ethylhexyl)adipate (DEHA) were analysed in 29 total diet samples, in 11 samples of baby food and in 11 samples of infant formulae. In all of the total diet samples the presence of one...... as in infant formulae. The calculated mean maximum intakes of the individual compounds from the total diet samples were below 10% of the restrictions proposed by the EU Scientific Committee for Food (SCF), and the spread in individual intakes was considerable. DEHP was the plasticizer determined most...
False high level in total bilirubin estimation in nonicteric serum
African Journals Online (AJOL)
estimation of total bilirubin by DiaSys and Randox reagents along with simultaneous re-estimation by Roche reagents in ... been used mainly due to slightly lower cost ... Elevated IgG causing spurious elevation in serum total bilirubin assay.
Iterative importance sampling algorithms for parameter estimation
Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.
2016-01-01
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
TOTAL INFRARED LUMINOSITY ESTIMATION OF RESOLVED AND UNRESOLVED GALAXIES
International Nuclear Information System (INIS)
Boquien, M.; Calzetti, D.; Bendo, G.; Dale, D.; Engelbracht, C.; Kennicutt, R.; Lee, J. C.; Van Zee, L.; Moustakas, J.
2010-01-01
The total infrared (TIR) luminosity from galaxies can be used to examine both star formation and dust physics. We provide here new relations to estimate the TIR luminosity from various Spitzer bands, in particular from the 8 μm and 24 μm bands. To do so, we use data for 45'' subregions within a subsample of nearby face-on spiral galaxies from the Spitzer Infrared Nearby Galaxies Survey (SINGS) that have known oxygen abundances as well as integrated galaxy data from the SINGS, the Local Volume Legacy survey (LVL), and Engelbracht et al. samples. Taking into account the oxygen abundances of the subregions, the star formation rate intensity, and the relative emission of the polycyclic aromatic hydrocarbons at 8 μm, the warm dust at 24 μm, and the cold dust at 70 μm and 160 μm, we derive new relations to estimate the TIR luminosity from just one or two of the Spitzer bands. We also show that the metallicity and the star formation intensity must be taken into account when estimating the TIR luminosity from two wave bands, especially when data longward of 24 μm are not available.
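Schematically, a two-band TIR estimate is a linear combination of monochromatic band luminosities. The sketch below uses placeholder coefficients, NOT the calibrations derived by Boquien et al., whose relations additionally depend on metallicity and star formation intensity:

```python
# Purely illustrative two-band total-infrared estimate. The coefficients
# a and b are hypothetical placeholders standing in for a published
# calibration; they are not values from the paper.
def tir_from_two_bands(L8, L24, a=2.5, b=1.5):
    """Estimate L_TIR as a linear combination of 8 and 24 micron
    luminosities (all quantities in the same units)."""
    return a * L8 + b * L24

L8, L24 = 1.0e9, 2.0e9     # hypothetical band luminosities in solar units
print(tir_from_two_bands(L8, L24))   # 5.5e9 with the placeholder coefficients
```

In the paper the coefficients themselves vary with oxygen abundance and star formation rate intensity, which is why a single fixed pair like the one above is only a schematic.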
Estimation of Total Tree Height from Renewable Resources Evaluation Data
Charles E. Thomas
1981-01-01
Many ecological, biological, and genetic studies use the measurement of total tree height. Until recently, the Southern Forest Experiment Station's inventory procedures through Renewable Resources Evaluation (RRE) have not included total height measurements. This note provides equations to estimate total height based on other RRE measurements.
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a system of total quality management is considered.
PFP total operating efficiency calculation and basis of estimate
International Nuclear Information System (INIS)
SINCLAIR, J.C.
1999-01-01
The purpose of the Plutonium Finishing Plant (PFP) Total Operating Efficiency Calculation and Basis of Estimate document is to provide the calculated value and basis of estimate for the Total Operating Efficiency (TOE) for the material stabilization operations to be conducted in 234-52 Building. This information will be used to support both the planning and execution of the Plutonium Finishing Plant (PFP) Stabilization and Deactivation Project's (hereafter called the Project) resource-loaded, integrated schedule.
Sampling point selection for energy estimation in the quasicontinuum method
Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.
2010-01-01
The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of
Total and inorganic arsenic in fish samples from Norwegian waters.
Julshamn, Kaare; Nilsen, Bente M; Frantzen, Sylvia; Valdersnes, Stig; Maage, Amund; Nedreaas, Kjell; Sloth, Jens J
2012-01-01
The contents of total arsenic and inorganic arsenic were determined in fillet samples of Northeast Arctic cod, herring, mackerel, Greenland halibut, tusk, saithe and Atlantic halibut. In total, 923 individual fish samples were analysed. The fish were mostly caught in the open sea off the coast of Norway, from 40 positions. The determination of total arsenic was carried out by inductively coupled plasma mass spectrometry following microwave-assisted wet digestion. The determination of inorganic arsenic was carried out by high-performance liquid chromatography-ICP-MS following microwave-assisted dissolution of the samples. The concentrations found for total arsenic varied greatly between fish species, and ranged from 0.3 to 110 mg kg(-1) wet weight. For inorganic arsenic, the concentrations found were very low (fish used in the recent EFSA opinion on arsenic in food.
Estimating Sample Size for Usability Testing
Directory of Open Access Journals (Sweden)
Alex Cazañas
2017-02-01
One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
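The "magic number 5" rests on the cumulative detection model P = 1 − (1 − p)^n, where p is the per-user probability of encountering a given problem. The sketch below assumes the commonly cited p = 0.31 behind the 5-user rule; that value is an assumption about the interface, not a universal constant, which is precisely why the rule breaks down.

```python
import math

def detection_rate(p, n):
    """Probability that a problem hit by each user with probability p
    is found at least once in n independent users."""
    return 1 - (1 - p) ** n

def users_needed(p, target):
    """Smallest n whose detection probability reaches the target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

print(detection_rate(0.31, 5))   # ~0.84: the 80% claim holds only if p = 0.31
print(users_needed(0.10, 0.80))  # 16: far more users when problems are rarer
```

Lowering p from 0.31 to 0.10 more than triples the required sample, which matches the study's finding that the 5-user rule underestimates sample size.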
Determination of total organic phosphorus in samples of mineral soils
Directory of Open Access Journals (Sweden)
Armi Kaila
1962-01-01
In this paper some observations on the estimation of organic phosphorus in mineral soils are reported. The fact is emphasized that the accuracy of all the methods available is relatively poor: usually there is no reason to pay attention to differences of less than about 20 ppm of organic P. Analyses performed on 345 samples of Finnish mineral soils by the extraction method of Mehta et al. (10) and by a simple procedure adopted by the author (successive extractions with 4 N H2SO4 and 0.5 N NaOH at room temperature in the ratio of 1 to 100) gave, on the average, equal results. It seemed likely that the Mehta method removed the organic phosphorus more completely than did the less vigorous method, but in the former the partial hydrolysis of organic phosphorus compounds tends to be higher than in the latter. An attempt was made to find out whether the differences between the respective values for organic phosphorus obtained by an ignition method and the simple extraction method could be connected with any characteristics of the soil. No correlation, or only a low correlation coefficient, could be calculated between the difference in the results of these two methods and e.g. the pH value, the content of clay, organic carbon, aluminium and iron soluble in Tamm's acid oxalate, the indicator of the phosphate sorption capacity, or the "Fe-bound" inorganic phosphorus, respectively. The absolute difference tended to increase with an increase in the content of organic phosphorus. For the 250 samples of surface soils analyzed, the ignition method gave values which were, on the average, about 50 ppm higher than the results obtained by the extraction procedure. The corresponding difference for the 120 samples from deeper layers was about 20 ppm of organic P. The author recommends, for the present, determining the total soil organic phosphorus as the average of the results obtained by the ignition method and the extraction method.
Evaluation of sampling strategies to estimate crown biomass
Directory of Open Access Journals (Sweden)
Krishna P Poudel
2015-01-01
Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with probability-proportional-to-size estimation produced better results when three or six branches per tree were sampled. However, systematic sampling with ratio estimation was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of
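The probability-proportional-to-size idea evaluated above can be sketched with a Hansen-Hurwitz expansion: branches are drawn with probability proportional to an auxiliary size measure, and each sampled branch's biomass is expanded by its selection probability. All numbers below are hypothetical, and the size measure (e.g. branch diameter squared) is an assumption for illustration.

```python
# PPS (probability-proportional-to-size) estimate of total crown biomass.
sizes = [10.0, 6.0, 4.0, 8.0, 2.0, 5.0]   # auxiliary size of every branch (assumed)
total_size = sum(sizes)

# (branch index, measured biomass) for the branches drawn in the PPS sample
sampled = [(0, 9.5), (3, 7.2), (5, 4.1)]
n = len(sampled)

# Hansen-Hurwitz estimator: average of y_i / p_i over the n draws,
# where p_i = size_i / total_size is the per-draw selection probability
total_hat = sum(y / (sizes[i] / total_size) for i, y in sampled) / n
print(total_hat)
```

Because larger branches carry more biomass, weighting draws by size tends to reduce the variance of the estimated crown total relative to simple random sampling.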
AAS determination of total mercury content in environmental samples
International Nuclear Information System (INIS)
Moskalova, M.; Zemberyova, M.
1997-01-01
Two methods for determination of total mercury content in environmental samples, soils and sediments, were compared: a dissolution procedure for soils, sediments and biological material under elevated pressure followed by determination of mercury by cold-vapour atomic absorption spectrometry using an MHS-1 system, and direct total mercury determination from soil samples, without any chemical pretreatment, using a Trace Mercury Analyzer TMA-254. The TMA-254 was also applied to the determination of mercury in various further standard reference materials. Good agreement with certified values of environmental reference materials was obtained. (authors)
Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops
Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said
2017-11-01
Installations of solar panels on Australian rooftops have been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, for which a probability distribution is factorized, with special attention paid to Australia through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the quasi-Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating satisfactory agreement between the actual data variation and the model.
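A clearness-index-to-diffuse-fraction correlation of the kind the model relies on can be sketched as follows. The piecewise polynomial below is the widely used Erbs correlation, shown only as an illustrative stand-in; the paper selects a best-fit correlation specific to Australia, which may differ.

```python
# Split global horizontal irradiance into diffuse and beam components
# using the Erbs et al. diffuse-fraction correlation (illustrative stand-in
# for the Australian best-fit correlation chosen in the paper).
def diffuse_fraction(kt):
    """Diffuse fraction Id/I as a function of clearness index kt (Erbs)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

ghi = 800.0    # global horizontal irradiance, W/m^2 (hypothetical)
kt = 0.65      # clearness index (hypothetical)
dhi = diffuse_fraction(kt) * ghi    # diffuse component
bhi = ghi - dhi                     # beam component on the horizontal
```

The diffuse and beam components are then transposed separately onto the tilted panel plane, which is where the total-irradiance estimate and its uncertainty arise.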
PFP total process throughput calculation and basis of estimate
International Nuclear Information System (INIS)
SINCLAIR, J.C.
1999-01-01
The PFP Process Throughput Calculation and Basis of Estimate document provides the calculated value and basis of estimate for the process throughput associated with material stabilization operations conducted in 234-52 Building. The process throughput data provided reflect the best estimates of material processing rates consistent with experience at the Plutonium Finishing Plant (PFP) and other U.S. Department of Energy (DOE) sites. The rates shown reflect demonstrated capacity during ''full'' operation. They do not reflect impacts of building down time; therefore, these throughput rates need to have a Total Operating Efficiency (TOE) factor applied.
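The arithmetic of applying a TOE factor to a demonstrated "full operation" rate is simple enough to show directly; the rate, TOE value and backlog below are made-up illustrations, not PFP figures.

```python
# Apply a Total Operating Efficiency factor to a demonstrated throughput
# to obtain a calendar-time processing rate. All numbers are hypothetical.
demonstrated_rate = 10.0   # items stabilized per operating day at full rate
toe = 0.6                  # fraction of calendar time actually operating

effective_rate = demonstrated_rate * toe    # items per calendar day
backlog = 1200.0                            # items awaiting stabilization
days_required = backlog / effective_rate

print(days_required)   # 200.0 calendar days
```

This is why a schedule built on demonstrated capacity alone would understate duration: the TOE factor converts process-rate time into calendar time.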
Determination of nitrite, nitrate and total nitrogen in vegetable samples
Directory of Open Access Journals (Sweden)
Manas Kanti Deb
2007-04-01
Full Text Available The yellow diazonium cation formed by the reaction of nitrite with 6-amino-1-naphthol-3-sulphonic acid is coupled with β-naphthol in strongly alkaline medium to yield a pink azo dye. The azo dye shows an absorption maximum at 510 nm with a molar absorptivity of 2.5 × 10⁴ M⁻¹ cm⁻¹. The dye product obeys Beer's law (correlation coefficient = 0.997) in terms of nitrite concentration up to 2.7 μg NO₂⁻ mL⁻¹. This colour reaction has been applied successfully to the determination of nitrite, nitrate and total nitrogen in vegetable samples. Unreduced samples give a direct measure of nitrite, whereas reduction of samples on a copperized-cadmium column gives the total nitrogen content; the difference gives the nitrate content of the samples. A variety of vegetables have been tested for their N content (NO₂⁻/NO₃⁻/total N), with %RSD ranging between 1.5 and 2.5% for nitrite determination. The effects of foreign ions on the determination of nitrite, nitrate, and total nitrogen have been studied. Statistical comparison of the results with those of a reported method shows good agreement and indicates no significant difference in precision.
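The difference method in the abstract above (unreduced sample gives nitrite, reduced sample gives total nitrogen, nitrate by subtraction) can be sketched with Beer's law. The molar absorptivity is taken from the abstract; the absorbance readings and the 1 cm path length are hypothetical.

```python
# Sketch of the colorimetric difference method, assuming a 1 cm path length.
# Molar absorptivity (2.5e4 M^-1 cm^-1 at 510 nm) is from the abstract;
# the absorbance readings below are hypothetical.

EPSILON = 2.5e4   # molar absorptivity at 510 nm, M^-1 cm^-1
PATH_CM = 1.0     # cuvette path length, cm (assumed)

def nitrite_conc_M(absorbance):
    """Beer's law: A = epsilon * c * l  =>  c = A / (epsilon * l)."""
    return absorbance / (EPSILON * PATH_CM)

def nitrate_by_difference(total_n_M, nitrite_M):
    """Total N (aliquot reduced on the Cd column) minus unreduced nitrite."""
    return total_n_M - nitrite_M

nitrite = nitrite_conc_M(0.25)                                  # unreduced aliquot
nitrate = nitrate_by_difference(nitrite_conc_M(0.60), nitrite)  # reduced aliquot
```

Multiplying `nitrite` by the molar mass of NO₂⁻ converts the molar result to the μg/mL units used in the abstract.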
Total Decomposition of Environmental Radionuclide Samples with a Microwave Oven
International Nuclear Information System (INIS)
Ramon Garcia, Bernd Kahn
1998-01-01
Closed-vessel microwave-assisted acid decomposition was investigated as an alternative to traditional methods of sample dissolution/decomposition. This technique, used in analytical chemistry, has some potential advantages over other procedures: it requires fewer reagents, it is faster, and it has the potential of achieving total dissolution because of the higher temperatures and pressures reached.
Differences in sampling techniques on total post-mortem tryptase.
Tse, R; Garland, J; Kesha, K; Elstub, H; Cala, A D; Ahn, Y; Stables, S; Palmiere, C
2017-11-20
The measurement of mast cell tryptase is commonly used to support the diagnosis of anaphylaxis. In the post-mortem setting, the literature recommends sampling from peripheral blood sources (femoral blood) but does not specify the exact sampling technique. Sampling techniques vary between pathologists, and it is unclear whether different sampling techniques have any impact on post-mortem tryptase levels. The aim of this study was to compare the difference in femoral total post-mortem tryptase levels between two sampling techniques in a 6-month retrospective study: (1) aspirating the femoral vessels with a needle and syringe prior to evisceration and (2) femoral vein cut-down during evisceration. Twenty cases were identified, with three cases excluded from analysis. There was a statistically significant difference (paired t test) in femoral total post-mortem tryptase levels between the two sampling methods. The clinical significance of this finding, and what factors may contribute to it, are unclear. When requesting post-mortem tryptase, the pathologist should consider documenting the exact blood collection site and the method used for collection. In addition, blood samples acquired by different techniques should not be mixed together and should be analyzed separately if possible.
Estimating Soil Bulk Density and Total Nitrogen from Catchment ...
African Journals Online (AJOL)
Even though data on soil bulk density (BD) and total nitrogen (TN) are essential for planning modern farming techniques, their data availability is limited for many applications in the developing word. This study is designed to estimate BD and TN from soil properties, land-use systems, soil types and landforms in the ...
Precise estimation of total solar radiation on tilted surface
African Journals Online (AJOL)
rajeev
Measured solar radiation data, required for precise sizing of energy systems, are rarely available. The total solar radiation at different orientations and slopes is needed to calculate the efficiency of installed solar energy systems. To calculate the clearness index (Kt) used by Gueymard (2000) for estimating the solar irradiation H, the irradiation at the earth's surface has ...
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of a study. Important factors to consider when estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of error) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques used in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
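The factors listed above combine into the standard sample-size formula for estimating a proportion. A minimal sketch follows; the 95% z-value and the finite-population correction are the usual textbook choices, not specifics from the article.

```python
import math

def sample_size_proportion(p, margin, z=1.96, population=None):
    """n = z^2 * p * (1 - p) / d^2, optionally followed by the
    finite-population correction n / (1 + (n - 1) / N)."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# Worst-case proportion 0.5, 5% margin of error, 95% confidence:
print(sample_size_proportion(0.5, 0.05))                   # 385
print(sample_size_proportion(0.5, 0.05, population=1000))  # 278
```

The worst-case proportion p = 0.5 maximizes p(1 - p), so it is the conservative default when the expected proportion is unknown.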
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
An Estimate of the Total DNA in the Biosphere.
Landenmark, Hanna K E; Forgan, Duncan H; Cockell, Charles S
2015-06-01
Modern whole-organism genome analysis, in combination with biomass estimates, allows us to estimate a lower bound on the total information content in the biosphere: 5.3 × 10³¹ (±3.6 × 10³¹) megabases (Mb) of DNA. Given conservative estimates regarding DNA transcription rates, this information content suggests biosphere processing speeds exceeding yottaNOPS values (10²⁴ Nucleotide Operations Per Second). Although prokaryotes evolved at least 3 billion years before plants and animals, we find that the information content of prokaryotes is similar to that of plants and animals at the present day. This information-based approach offers a new way to quantify anthropogenic and natural processes in the biosphere and its information diversity over time.
Estimation of Total Body Fat from Potassium-40 Content
International Nuclear Information System (INIS)
Taha Mohamed Taha Ahmed, T.M.T.
2010-01-01
This paper concerns the estimation of total body fat from potassium-40 content using the total body counting technique. The work was performed using a FastScan whole-body counter. Calibration of the system for K-40 was carried out under the assumption that potassium radioactivity was uniformly distributed in a phantom of 10 polyethylene bottles. Different body sizes were represented by 2, 4, 6, 8 and 10 polyethylene bottles; each bottle has a volume of 0.04 m³. The counting efficiency for each body size was determined. Lean body weight (LBW) was calculated for ten males and ten females using an appropriate mathematical equation. Total body potassium (TBK) for the same group was measured using the whole-body counter. A mathematical relationship between lean body weight and potassium content was deduced. Fat contents for some individuals were calculated, and the weight/height ratio was used as an indicator of fatness.
On efficiency of some ratio estimators in double sampling design ...
African Journals Online (AJOL)
In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002) , Raj (1972) and Raj and Chandhok (1999).
Design-based estimators for snowball sampling
Shafie, Termeh
2010-01-01
Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
Sampling, feasibility, and priors in Bayesian estimation
Chorin, Alexandre J.; Lu, Fei; Miller, Robert N.; Morzfeld, Matthias; Tu, Xuemin
2015-01-01
Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilation is presented, showing in detail why feasibility depends on the Frobenius norm of the covariance matrix of the noise and not on the number of va...
Estimating population salt intake in India using spot urine samples.
Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce
2017-11-01
To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection was 8.59 g/day (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)], and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.
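The acceptance check described above, where a spot-urine equation is judged reasonable if its weighted mean falls within 1 g/day of the 24-h urine reference, can be expressed directly. The figures are the Delhi and Haryana means quoted in the abstract.

```python
# Sketch of the validation criterion: an equation is "reasonable" if its
# population-mean estimate is within 1 g/day of the 24-h urine reference.
# Figures are the Delhi and Haryana means quoted in the abstract.

REFERENCE_G_DAY = 8.59  # weighted mean from 24-h collections

equation_means = {
    "Tanaka": 9.04,
    "Mage": 8.80,
    "INTERSALT": 7.99,
    "INTERSALT with potassium": 8.13,
    "Toft": 9.94,
    "Kawasaki": 12.14,
}

def within_tolerance(estimate, reference=REFERENCE_G_DAY, tol=1.0):
    return abs(estimate - reference) <= tol

acceptable = sorted(name for name, m in equation_means.items() if within_tolerance(m))
print(acceptable)
```

As the abstract reports, four equations pass this check for Delhi and Haryana while Toft and Kawasaki do not.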
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
...(distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km² for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
Assessing total and volatile solids in municipal solid waste samples.
Peces, M; Astals, S; Mata-Alvarez, J
2014-01-01
Municipal solid waste is generated in large quantities by everyday activities, and its treatment is a global challenge. Total solids (TS) and volatile solids (VS) are typical control parameters measured in biological treatments. In this study, the TS and VS were determined using the standard methods, as well as introducing some variants: (i) the drying temperature for the TS assays was 105°C, 70°C or 50°C and (ii) the VS were determined using different heating ramps from room temperature to 550°C. TS could be determined at either 105°C or 70°C, but the oven residence time tripled at 70°C, increasing from 48 to 144 h. The VS could be determined by smouldering the sample (where the sample is burnt without a flame), which avoids the release of fumes and odours in the laboratory. However, smouldering can generate undesired pyrolysis products as a consequence of carbonization, which leads to VS being underestimated. Carbonization can be avoided by using slow heating ramps to prevent oxygen limitation. Furthermore, crushing the sample cores decreased the time to reach constant weight and decreased the potential to underestimate VS.
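The TS and VS control parameters reduce to simple gravimetric ratios. A sketch under the usual definitions follows; the masses are hypothetical, with dry mass taken after oven drying and ash after ignition at 550°C.

```python
def total_solids_fraction(wet_g, dry_g):
    """TS = mass remaining after oven drying / wet mass."""
    return dry_g / wet_g

def volatile_solids_fraction(wet_g, dry_g, ash_g):
    """VS = mass lost on ignition at 550 C / wet mass."""
    return (dry_g - ash_g) / wet_g

# e.g. a 10 g wet sample leaving 4 g after drying and 1 g of ash:
print(total_solids_fraction(10.0, 4.0))         # 0.4
print(volatile_solids_fraction(10.0, 4.0, 1.0)) # 0.3
```

VS is also often reported relative to TS (here 0.3 / 0.4 = 75% of TS); either convention works as long as it is stated.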
Estimation of total body water by bioelectrical impedance analysis
International Nuclear Information System (INIS)
Kushner, R.F.; Schoeller, D.A.
1986-01-01
Total body water (TBW) measured by bioelectrical impedance analysis (BIA) was directly compared with deuterium-isotope dilution in a total of 58 subjects. First, sex-specific and group equations were developed by multiple regression analysis in obese and nonobese men and women (10 each). Height²/resistive impedance was the most significant variable for predicting deuterium-dilution space (D2O-TBW) and, combined with weight, yielded R = 0.99 and an SE of estimate of 1.75 L. The equations predicted D2O-TBW equally well for obese and nonobese subjects. Second, the equations were prospectively tested in a heterogeneous group of 6 males and 12 females. Sex-specific equations predicted D2O-TBW with good correlation coefficients (0.96 and 0.93), total error (2.34 and 2.89 L), and a small difference between mean predicted and measured D2O-TBW (-1.4 ± 2.05 and -0.48 ± 2.83 L). BIA predicts D2O-TBW more accurately than weight, height, and/or age. A larger population is required to validate the applicability of our equations.
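The regression step above can be sketched with ordinary least squares. This is a hedged illustration: the data below are synthetic, generated from known coefficients, and are not the study's measurements or its published equations.

```python
import numpy as np

# Fit TBW ~ b0 + b1*(height^2 / resistance) + b2*weight by least squares,
# mirroring the abstract's predictors. Synthetic data, not the study's.
rng = np.random.default_rng(0)
h2_over_r = rng.uniform(20, 60, 40)   # height^2 / resistance, cm^2/ohm
weight = rng.uniform(50, 110, 40)     # body weight, kg
tbw = 1.5 + 0.6 * h2_over_r + 0.1 * weight + rng.normal(0, 0.5, 40)

X = np.column_stack([np.ones_like(h2_over_r), h2_over_r, weight])
coef, *_ = np.linalg.lstsq(X, tbw, rcond=None)
print(coef)  # recovers approximately [1.5, 0.6, 0.1]
```

Prospective validation, as in the abstract, then means applying `coef` to a new sample and comparing predictions against the dilution reference.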
Sample Based Unit Liter Dose Estimates
International Nuclear Information System (INIS)
JENSEN, L.
2000-01-01
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)
Sample Based Unit Liter Dose Estimates
International Nuclear Information System (INIS)
JENSEN, L.
1999-01-01
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks
Estimating Aquatic Insect Populations. Introduction to Sampling.
Chihuahuan Desert Research Inst., Alpine, TX.
This booklet introduces high school and junior high school students to the major groups of aquatic insects and to population sampling techniques. Chapter 1 consists of a short field guide which can be used to identify five separate orders of aquatic insects: odonata (dragonflies and damselflies); ephemeroptera (mayflies); diptera (true flies);…
Temporally stratified sampling programs for estimation of fish impingement
International Nuclear Information System (INIS)
Kumar, K.D.; Griffith, J.S.
1977-01-01
Impingement monitoring programs often expend valuable and limited resources yet fail to provide a dependable estimate of either total annual impingement or the biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize the information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques for designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are the most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are used as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species
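A seasonally stratified estimate of annual impingement, in the spirit of the abstract, treats each season as a stratum of N_h days, samples n_h of them, and expands the seasonal mean daily counts. The counts below are hypothetical.

```python
# Sketch of a temporally stratified total: annual impingement estimated as
# sum over strata of (days in stratum) * (mean daily count in stratum).
# Daily counts are hypothetical, with heavy winter impingement of shad.

seasons = {
    # name: (days_in_stratum, sampled_daily_counts)
    "winter": (90, [120, 95, 140, 110]),
    "spring": (92, [20, 35, 15]),
    "summer": (92, [5, 8, 4]),
    "fall":   (91, [30, 45, 25]),
}

def stratified_total(strata):
    total = 0.0
    for days, counts in strata.values():
        mean_daily = sum(counts) / len(counts)
        total += days * mean_daily
    return total

print(stratified_total(seasons))
```

Allocating more sampling days to the high-impingement stratum (winter, here) reduces the variance of the total, which is the design argument the abstract makes.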
Poisson sampling - The adjusted and unadjusted estimator revisited
Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas
1998-01-01
The prevailing assumption, that for Poisson sampling the adjusted estimator Ŷ_a is always substantially more efficient than the unadjusted estimator Ŷ_u, is shown to be incorrect. Some well-known theoretical results are applicable, since Ŷ_a is a ratio-of-means estimator and Ŷ_u a simple unbiased estimator...
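The two estimators compared above can be sketched for Poisson sampling, where each unit i enters the sample independently with known inclusion probability pi_i. The data values are hypothetical.

```python
# Sketch of the unadjusted (Horvitz-Thompson) and adjusted Poisson-sampling
# estimators. Each unit i is included independently with probability pi_i.

def unadjusted_estimate(sample):
    """Y-hat_u: the Horvitz-Thompson sum of y_i / pi_i over included units."""
    return sum(y / pi for y, pi in sample)

def adjusted_estimate(sample, expected_n):
    """Y-hat_a: rescales Y-hat_u by expected vs. realized sample size."""
    return (expected_n / len(sample)) * unadjusted_estimate(sample)

# Hypothetical realized sample of (y_i, pi_i) pairs; the design had E[n] = 3.
included = [(12.0, 0.5), (7.0, 0.25), (20.0, 0.5), (3.0, 0.125)]
y_u = unadjusted_estimate(included)              # 24 + 28 + 40 + 24 = 116
y_a = adjusted_estimate(included, expected_n=3)  # (3/4) * 116 = 87
```

The adjustment dampens the variability caused by the random sample size, which is why Ŷ_a is often assumed more efficient; the paper's point is that this is not always so.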
Sparse covariance estimation in heterogeneous samples.
Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era.
Estimation of the total number of mast cells in the human umbilical cord. A methodological study
DEFF Research Database (Denmark)
Engberg Damsgaard, T M; Windelborg Nielsen, B; Sørensen, Flemming Brandt
1992-01-01
The aim of the present study was to estimate the total number of mast cells in the human umbilical cord. Using 50-μm-thick paraffin sections made from a systematic random sample of umbilical cord, the total number of mast cells per cord was estimated using a combination of the optical disector and fractionated sampling. The mast cells of the human umbilical cord were found in Wharton's jelly, most frequently in close proximity to the three blood vessels. No consistent pattern of variation in mast cell numbers from the fetal end of the umbilical cord towards the placenta was seen. The total number of mast cells found in the umbilical cord was 5,200,000 (median), range 2,800,000-16,800,000 (n = 7), that is, 156,000 mast cells per gram of umbilical cord (median), range 48,000-267,000. Thus, the umbilical cord constitutes an adequate source of mast cells for further investigation.
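The fractionated-sampling estimate works by counting cells in a known fraction of the cord and dividing by the product of the sampling fractions at each stage. The fractions and count below are hypothetical, not the study's values.

```python
# Sketch of the fractionator principle: total = count / (f1 * f2 * ... * fk),
# where each f is the sampling fraction at one stage. Values are hypothetical.

def fractionator_estimate(counted, sampling_fractions):
    total_fraction = 1.0
    for f in sampling_fractions:
        total_fraction *= f
    return counted / total_fraction

# e.g. 1/10 of cord blocks, 1/20 of sections, disector covering 1/4 of each
# section: overall fraction 1/800, so 65 counted cells estimate 52,000 total.
n_total = fractionator_estimate(counted=65, sampling_fractions=[0.1, 0.05, 0.25])
```

The estimate is design-unbiased regardless of how cells are distributed along the cord, which is why the systematic random sampling mentioned in the abstract suffices.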
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, including sample size estimation formulas and their realization based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
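For quantitative data under the one-factor, two-level design described above, the usual normal-approximation formula gives the per-group sample size. This is a sketch of that textbook formula, not the article's SAS implementation.

```python
from statistics import NormalDist
import math

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample difference test of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma / delta)^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

# Detect a difference of half a standard deviation at alpha = 0.05, power = 0.80:
print(n_per_group(sigma=1.0, delta=0.5))  # 63 per group
```

Exact t-based calculations (as SAS PROC POWER performs) give slightly larger n; the normal approximation is the standard hand formula.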
International Nuclear Information System (INIS)
Ketelsen, P.; Knoechel, A.
1984-01-01
Aerosol samples on filter supports were analyzed using the X-ray fluorescence analytical method (Mo excitation) with totally reflecting sample carriers (TXFA). Wet decomposition of the sample material with HNO₃ in a closed system and subsequent sample preparation by evaporating an aliquot of the solution on the sample carrier yields detection limits down to 0.3 ng/cm². The reproducibilities of the measurements of the elements K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb lie between 5 and 25%. Similar detection limits and reproducibilities are obtained when a low-temperature oxygen plasma is employed for the direct ashing of the homogeneously covered filter on the sample carrier. Systematic losses of elements in both methods were investigated with radiotracers as well as with inactive techniques. A comparison of the results with those obtained by NAA, AAS and PIXE shows good agreement in most cases. For bromine determination and rapid coverage of the main elements, a possibility of measuring the filter membrane directly, omitting the ashing step, is indicated. The corresponding detection limits are down to 3 ng/cm². (orig.)
Bayesian Simultaneous Estimation for Means in k Sample Problems
Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay
2017-01-01
This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Efficient estimation for ergodic diffusions sampled at high frequency
DEFF Research Database (Denmark)
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High-frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...
Estimating total 239,240Pu in blow-sand mounds of two safety-shot sites
International Nuclear Information System (INIS)
Gilbert, R.O.; Essington, E.H.
1977-01-01
A study for estimating the total amount (inventory) of 239,240Pu in blow-sand mounds at two safety-shot sites (Area 13-Project 57 on the Nellis Air Force Base and Clean Slate 3 on the Tonopah Test Range in Nevada) is described. The total amount in blow-sand mounds at these two sites is estimated to be 5.8 ± 1.3 (total ± standard error) and 10.6 ± 2.5 curies, respectively. The total 239,240Pu in mounds plus desert pavement areas, both to a depth of 5 cm below desert pavement level, is estimated to be 39 ± 5.7 curies at the Project 57 site and 36 ± 4.8 curies at Clean Slate 3. These estimates are compared with the somewhat higher previously reported estimates of 46 ± 9 and 37 ± 5.4 curies, which pertain to only the top 5 cm of mounds and desert pavement. The possibility is discussed that these differences are due to sampling variability arising from the skewed nature of plutonium concentrations, particularly near ground zero
Estimation of total bacteria by real-time PCR in patients with periodontal disease.
Brajović, Gavrilo; Popović, Branka; Puletić, Miljan; Kostić, Marija; Milasin, Jelena
2016-01-01
Periodontal diseases are associated with elevated levels of bacteria within the gingival crevice. The aim of this study was to evaluate the total amount of bacteria in subgingival plaque samples from patients with periodontal disease. A quantitative evaluation of the total bacterial amount using quantitative real-time polymerase chain reaction (qRT-PCR) was performed on 20 samples from patients with ulceronecrotic periodontitis and on 10 samples from healthy subjects. The estimation of the total bacterial amount was based on the 16S rRNA gene copy number, determined by comparison with the Ct values/gene copy numbers of the standard curve. A statistically significant difference between the average gene copy number of total bacteria in periodontal patients (2.55 × 10⁷) and healthy controls (2.37 × 10⁶) was found (p = 0.01). Also, a trend of higher gene copy numbers in deeper periodontal lesions (> 7 mm) was confirmed by a positive coefficient of correlation (r = 0.073). The quantitative estimation of total bacteria based on gene copy number could be an important additional tool in diagnosing periodontitis.
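The standard-curve step above rests on the linearity of Ct in log10(copies). A minimal sketch follows; the slope and intercept are hypothetical calibration values, not the study's.

```python
# Sketch of qPCR quantification from a standard curve:
# Ct = slope * log10(copies) + intercept
# => copies = 10 ** ((Ct - intercept) / slope).
# Slope and intercept are hypothetical; a slope near -3.32 corresponds to
# ~100% amplification efficiency (doubling per cycle).

SLOPE = -3.32
INTERCEPT = 38.0  # Ct corresponding to a single copy, in this calibration

def copies_from_ct(ct, slope=SLOPE, intercept=INTERCEPT):
    return 10 ** ((ct - intercept) / slope)
```

Lower Ct means more template: each decrease of about 3.32 cycles corresponds to a tenfold increase in 16S rRNA gene copies.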
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
Estimation of total error in DWPF reported radionuclide inventories. Revision 1
International Nuclear Information System (INIS)
Edwards, T.B.
1995-01-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is required to determine and report the radionuclide inventory of its glass product. For each macro-batch, the DWPF will report both the total amount (in curies) of each reportable radionuclide and the average concentration (in curies/gram of glass) of each reportable radionuclide. The DWPF is to provide the estimated error of these reported values of its radionuclide inventory as well. The objective of this document is to provide a framework for determining the estimated error in DWPF's reporting of these radionuclide inventories. This report investigates the impact of random errors due to measurement and sampling on the total amount of each reportable radionuclide in a given macro-batch. In addition, the impact of these measurement and sampling errors and process variation are evaluated to determine the uncertainty in the reported average concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch
Total body skeletal muscle mass: estimation by creatine (methyl-d3) dilution in humans
Walker, Ann C.; O'Connor-Semmes, Robin L.; Leonard, Michael S.; Miller, Ram R.; Stimpson, Stephen A.; Turner, Scott M.; Ravussin, Eric; Cefalu, William T.; Hellerstein, Marc K.; Evans, William J.
2014-01-01
Current methods for clinical estimation of total body skeletal muscle mass have significant limitations. We tested the hypothesis that creatine (methyl-d3) dilution (D3-creatine) measured by enrichment of urine D3-creatinine reveals total body creatine pool size, providing an accurate estimate of total body skeletal muscle mass. Healthy subjects with different muscle masses [n = 35: 20 men (19–30 yr, 70–84 yr), 15 postmenopausal women (51–62 yr, 70–84 yr)] were housed for 5 days. Optimal tracer dose was explored with single oral doses of 30, 60, or 100 mg D3-creatine given on day 1. Serial plasma samples were collected for D3-creatine pharmacokinetics. All urine was collected through day 5. Creatine and creatinine (deuterated and unlabeled) were measured by liquid chromatography mass spectrometry. Total body creatine pool size and muscle mass were calculated from D3-creatinine enrichment in urine. Muscle mass was also measured by magnetic resonance imaging (MRI), dual-energy x-ray absorptiometry (DXA), and traditional 24-h urine creatinine. D3-creatine was rapidly absorbed and cleared with variable urinary excretion. Isotopic steady-state of D3-creatinine enrichment in the urine was achieved by 30.7 ± 11.2 h. Mean steady-state enrichment in urine provided muscle mass estimates that correlated well with MRI estimates for all subjects (r = 0.868, P …). The D3-creatine dose determined by urine D3-creatinine enrichment provides an estimate of total body muscle mass strongly correlated with estimates from serial MRI, with less bias than total lean body mass assessment by DXA. PMID:24764133
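The dilution arithmetic behind the method can be sketched in a few lines. This is an illustrative sketch, not the authors' computation: the enrichment value is made up, and the constant of ~4.3 g creatine per kg wet muscle is an assumed literature figure.

```python
def creatine_pool_size_g(dose_mg: float, enrichment: float) -> float:
    """Isotope dilution: at steady state the urine D3-creatinine enrichment
    E ~ tracer / (tracer + unlabeled pool), so pool = dose * (1 - E) / E.
    `enrichment` is the tracer mole fraction (dimensionless)."""
    return dose_mg * (1.0 - enrichment) / enrichment / 1000.0  # grams

# Assumed literature constant: grams of creatine per kg of wet muscle.
CREATINE_PER_KG_MUSCLE_G = 4.3

def muscle_mass_kg(dose_mg: float, enrichment: float) -> float:
    return creatine_pool_size_g(dose_mg, enrichment) / CREATINE_PER_KG_MUSCLE_G

# Illustrative numbers only: a 60 mg dose and 0.04% enrichment imply a
# ~150 g creatine pool, i.e. roughly 35 kg of skeletal muscle.
print(f"{muscle_mass_kg(60, 0.0004):.1f} kg")
```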
Trace determination of uranium in fertilizer samples by total ...
Indian Academy of Sciences (India)
the fertilizers is important because it can be used as fuel in nuclear reactors and also because of environmental concerns. ... The amounts of uranium in four fertilizer samples of Hungarian origin were determined by ... TXRF determination of uranium from phosphate fertilizers of Hungarian origin and the preliminary results ...
Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling
Ballal, Tarig
2014-09-01
In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
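The co-prime restriction on the inter-pulse interval can be illustrated directly. The sketch below is not the paper's estimator, just the equivalent-time-sampling argument: if pulse p is transmitted M desired-rate periods after pulse p-1 and the ADC keeps every P-th desired-rate sample, pulse p contributes samples at phase offsets congruent to (-p*M) mod P, which cover all P phases exactly when gcd(M, P) = 1.

```python
from math import gcd

def sample_phases(P: int, M: int) -> set:
    """Phases (mod P) of the desired-rate sample grid observed by an ADC
    running P times slower, over P pulses spaced M desired-rate periods."""
    return {(-p * M) % P for p in range(P)}

for P, M in [(5, 3), (6, 3)]:
    phases = sample_phases(P, M)
    status = "full coverage" if len(phases) == P else "phases lost"
    print(f"P={P}, M={M}, gcd={gcd(M, P)}: {sorted(phases)} ({status})")
```

With gcd(M, P) = 1 every phase is hit, so the channel impulse response can be reassembled at the full desired rate; otherwise some phases are never observed, the "loss of fidelity" the abstract mentions.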
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
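The baseline approach the abstract refers to treats the K lines as a simple random sample. A minimal sketch of that encounter-rate variance computation (toy counts and line lengths, not the paper's survey data; the formula is the standard random-line form, var(n/L) = K / (L² (K-1)) · Σ lₖ² (nₖ/lₖ - n/L)²):

```python
def encounter_rate_var(counts, lengths):
    """Encounter rate n/L and its variance across K transect lines,
    treating the lines as a simple random sample."""
    K = len(counts)
    n, L = sum(counts), sum(lengths)
    er = n / L
    var = K / (L**2 * (K - 1)) * sum(
        l**2 * (c / l - er) ** 2 for c, l in zip(counts, lengths)
    )
    return er, var

# Toy survey: 4 lines with lengths (km) and detection counts.
er, var = encounter_rate_var(counts=[12, 7, 15, 10], lengths=[2.0, 1.5, 2.5, 2.0])
print(f"encounter rate {er:.2f}/km, variance {var:.4f}")
```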
An approach to estimate total dissolved solids in groundwater using
African Journals Online (AJOL)
resistivities of the aquifer delineated were subsequently used to estimate TDS in groundwater, which was correlated with those ... the concentrations of these chemical constituents in the ... TDS determined by water analysis varied between 17 ...
Comparison of Four Estimators under sampling without Replacement
African Journals Online (AJOL)
The results were obtained using a program written in Microsoft Visual C++ programming language. It was observed that the two-stage sampling under unequal probabilities without replacement is always better than the other three estimators considered. Keywords: Unequal probability sampling, two-stage sampling, ...
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
The analysis of Th in the Korean total diet sample by RNAA
International Nuclear Information System (INIS)
Chung, Yong Sam; Moon, Jong Hwa; Kang, Sang Hoon; Park, Kwang Won
1999-01-01
In order to estimate the intake of 232Th through the daily diet, a Korean total diet sample was collected and prepared after investigating the amounts of daily dietary consumption for ages from the 20s to the 50s. For Th analysis, the RNAA method was applied, and NIST SRM 1575 (Pine Needles) was used as the quality control material. The result of the SRM analysis was compared with the certified value; the relative error was 5%. The determination of Th in the Korean total diet sample was carried out under the same analytical conditions and procedure as the SRM. The concentration of Th in the Korean total diet sample was 3.4 ± 0.2 ppb, and the daily intake of Th from the diet was found to be 0.67 μg per day. The radioactivity due to this Th intake was estimated to be about 2.7 mBq per person per day, and the annual dose equivalent was 0.73 μSv per person.
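As a consistency check on the reported figures, the specific activity of 232Th (half-life 1.405 × 10¹⁰ yr) shows that the quoted ~2.7 mBq per day corresponds to roughly 0.67 µg of 232Th per day, i.e. the intake figure is microgram-scale:

```python
import math

AVOGADRO = 6.022e23
T_HALF_S = 1.405e10 * 3.156e7  # 232Th half-life in seconds
DECAY_CONST = math.log(2) / T_HALF_S

def activity_mBq(mass_ug: float, molar_mass: float = 232.0) -> float:
    """Activity A = lambda * N of a pure 232Th mass (daughters ignored)."""
    atoms = mass_ug * 1e-6 / molar_mass * AVOGADRO
    return DECAY_CONST * atoms * 1e3  # mBq

# 0.67 ug of 232Th carries ~2.7 mBq, matching the abstract's daily figure.
print(f"{activity_mBq(0.67):.2f} mBq")
```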
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.
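For the paired design mentioned in the abstract, the usual normal-approximation formula is n = (z₁₋α/2 + z₁₋β)² σ_d² / δ², where σ_d is the standard deviation of the within-pair differences and δ the difference to detect. A minimal sketch using this generic formula (not the article's SAS POWER procedure):

```python
from math import ceil
from statistics import NormalDist

def n_pairs(delta: float, sigma_d: float, alpha=0.05, power=0.80) -> int:
    """Approximate number of pairs for a two-sided paired test of the mean
    difference: n = (z_{1-alpha/2} + z_{1-beta})^2 * (sigma_d / delta)^2."""
    z = NormalDist().inv_cdf
    return ceil((z(1 - alpha / 2) + z(power)) ** 2 * (sigma_d / delta) ** 2)

# Detecting a half-SD mean difference with 80% power needs ~32 pairs.
print(n_pairs(delta=0.5, sigma_d=1.0))
```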
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Stereological Methods for Estimation of Total Number of Particles in ...
African Journals Online (AJOL)
In certain organs, like the brain, it is important to count the number of neurons associated with a particular function or region. The count gives an estimate of the electronic units available for a specific task or are endowed with a quantum of electrical energy. Similar studies can be extended in organs like the kidney, glands ...
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
International Nuclear Information System (INIS)
Dayton, Elizabeth Ann; Whitacre, Shane; Holloman, Christopher
2017-01-01
As a result of impairments to fresh surface water quality due to phosphorus enrichment, substantial research effort has been put forth to quantify agricultural runoff phosphorus as related to on-field practices. While the analysis of runoff dissolved phosphorus is well prescribed and leaves little room for variability in methodology, there are several methods and variations of sample preparation reagents as well as analysis procedures for determining runoff total phosphorus. Due to the variation in methodology for determination of total phosphorus and the additional laboratory procedure required to measure suspended solids, the objectives of the current study are to (i) compare the performance of three persulfate digestion methods (Acid Persulfate, USGS, and Alkaline Persulfate) for total phosphorus percent recovery across a wide range of suspended sediments (SS), and (ii) evaluate the ability of Al and/or Fe in digestion solution to predict SS as a surrogate for the traditional gravimetric method. Percent recovery of total phosphorus was determined using suspensions prepared from soils collected from 21 agricultural fields in Ohio. The Acid Persulfate method was most effective, with an average total phosphorus percent recovery of 96.6%. The second most effective method was the USGS method, with an average total phosphorus recovery of 76.1%. However, the Alkaline Persulfate method performed poorly, with an average total phosphorus recovery of 24.5%. As a result, application of Alkaline Persulfate digestion to edge-of-field monitoring may drastically underestimate runoff total phosphorus. In addition to excellent recovery of total phosphorus, the Acid Persulfate method combined with analysis of Al and Fe by inductively coupled plasma atomic emission spectrometry provides a robust estimate of total SS. Due to the large quantity of samples that can result from water quality monitoring, an indirect measure of total SS could be very valuable when time and budget constraints limit
Comparison of distance sampling estimates to a known population ...
African Journals Online (AJOL)
Line-transect sampling was used to obtain abundance estimates of an Ant-eating Chat Myrmecocichla formicivora population to compare these with the true size of the population. The population size was determined by a long-term banding study, and abundance estimates were obtained by surveying line transects.
Estimating waste disposal quantities from raw waste samples
International Nuclear Information System (INIS)
Negin, C.A.; Urland, C.S.; Hitz, C.G.; GPU Nuclear Corp., Middletown, PA)
1985-01-01
Estimating the disposal quantity of waste resulting from stabilization of radioactive sludge is complex because of the many factors relating to sample analysis results, radioactive decay, allowable disposal concentrations, and options for disposal containers. To facilitate this estimation, a microcomputer spreadsheet template was created. The spreadsheet has saved considerable engineering hours. 1 fig., 3 tabs
Performance of sampling methods to estimate log characteristics for wildlife.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton
2004-01-01
Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...
Total evaporation estimates from a Renosterveld and dryland wheat ...
African Journals Online (AJOL)
2010-07-09
CSIR Natural Resources and the Environment, PO Box 320, Stellenbosch 7599, South ... A change in land use from Renosterveld to dryland annual crops could therefore affect the soil ... Modelling total evaporation spatially: Surface Energy ... similar, with ETo values ranging between 1.8 mm·d⁻¹ (on a cloudy ...
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
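The undercoverage phenomenon the study measures can be demonstrated in miniature: using a normal critical value in small-sample intervals for a mean (instead of the wider t value) yields observed coverage below the nominal 95%. This toy simulation is an illustration only, not the article's ordinal-CFA setup:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(3)

def coverage(n=10, reps=4000, z=1.96, mu=0.0):
    """Fraction of nominal-95% intervals mean +/- z*s/sqrt(n) that contain
    the true mean when the normal critical value z is used at small n."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(mu, 1.0) for _ in range(n)]
        half = z * stdev(x) / sqrt(n)
        hits += abs(mean(x) - mu) <= half
    return hits / reps

cov = coverage()
print(f"observed coverage at n=10: {cov:.3f}")  # falls short of 0.95
```

The correct t critical value at n = 10 is about 2.262, so the z-based interval is too narrow and covers only about 91-92% of the time.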
Turbidity-controlled sampling for suspended sediment load estimation
Jack Lewis
2003-01-01
Abstract - Automated data collection is essential to effectively measure suspended sediment loads in storm events, particularly in small basins. Continuous turbidity measurements can be used, along with discharge, in an automated system that makes real-time sampling decisions to facilitate sediment load estimation. The Turbidity Threshold Sampling method distributes...
Estimation for small domains in double sampling for stratification ...
African Journals Online (AJOL)
In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
The use of Thompson sampling to increase estimation precision
Kaptein, M.C.
2015-01-01
In this article, we consider a sequential sampling scheme for efficient estimation of the difference between the means of two independent treatments when the population variances are unequal across groups. The sampling scheme proposed is based on a solution to bandit problems called Thompson sampling.
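A minimal sketch of Thompson sampling for two Bernoulli arms (the standard Beta-Bernoulli version, not the estimation-oriented variant the article develops; the success probabilities are invented):

```python
import random

random.seed(1)

def thompson_two_arms(p_true=(0.3, 0.6), rounds=2000):
    """Thompson sampling with Beta(1,1) priors: each round, draw one sample
    from each arm's posterior and play the arm with the larger draw."""
    wins = [1, 1]    # Beta alpha parameters
    losses = [1, 1]  # Beta beta parameters
    pulls = [0, 0]
    for _ in range(rounds):
        draws = [random.betavariate(wins[a], losses[a]) for a in (0, 1)]
        a = draws.index(max(draws))
        pulls[a] += 1
        if random.random() < p_true[a]:
            wins[a] += 1
        else:
            losses[a] += 1
    return pulls, wins, losses

pulls, wins, losses = thompson_two_arms()
print(f"pulls per arm: {pulls}")  # allocation concentrates on the better arm
```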
Small sample GEE estimation of regression parameters for longitudinal data.
Paul, Sudhir; Zhang, Xuemao
2014-09-28
Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
An Improvement to Interval Estimation for Small Samples
Directory of Open Access Journals (Sweden)
SUN Hui-Ling
2017-02-01
Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples, and the Bayes Bootstrap method is commonly used in practice. Although the Bayes Bootstrap method has its own limitations, this article presents an improvement to it. The improved method extends the sample by numerical simulation without changing the character of the original small sample, and it gives accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
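For comparison, a plain percentile bootstrap interval, a simpler relative of the Bayes Bootstrap discussed in the abstract, can be sketched as follows (the data values are invented for illustration):

```python
import random
from statistics import mean

random.seed(7)

def percentile_bootstrap_ci(sample, stat=mean, reps=5000, alpha=0.10):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take empirical quantiles of the resampled values."""
    stats = sorted(stat(random.choices(sample, k=len(sample))) for _ in range(reps))
    lo = stats[int(reps * alpha / 2)]
    hi = stats[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

small_sample = [9.8, 10.4, 10.1, 9.6, 10.9, 10.2, 9.9, 10.3]
lo, hi = percentile_bootstrap_ci(small_sample)
print(f"90% interval for the mean: ({lo:.2f}, {hi:.2f})")
```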
Estimation of total alkaloid in Chitrakadivati by UV-Spectrophotometer.
Ajanal, Manjunath; Gundkalle, Mahadev B; Nayak, Shradda U
2012-04-01
Herbal formulation standardization by adopting newer techniques is the need of the hour in the field of the Ayurvedic pharmaceutical industry. As very few reports exist, such studies would certainly widen the herbal research area. Chitrakadivati is one such popular herbal formulation used in Ayurveda, and many of its ingredients are known for the presence of alkaloids. The presence of alkaloid was tested qualitatively by Dragendorff's method and then subjected to quantitative estimation by UV-spectrophotometer. This method is based on the reaction between alkaloid and bromocresol green (BCG). The study discloses that out of 16 ingredients, 9 contain alkaloids. Chitrakadivati showed an alkaloid concentration of 0.16%, which is significantly higher than that of its individual ingredients.
Effects of systematic sampling on satellite estimates of deforestation rates
International Nuclear Information System (INIS)
Steininger, M K; Godoy, F; Harper, G
2009-01-01
Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Estimation of creatinine in urine samples by Jaffe's method
International Nuclear Information System (INIS)
Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.
2012-01-01
In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one having all the voids collected during a 24-h period; as this is technically difficult, overnight urine samples collected by the workers are analyzed instead. These overnight urine samples are collected over 10-16 h; however, in the absence of any specific information, a 12 h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz., measurement of specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is a final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application to the estimation of creatinine in urine samples collected from occupational workers. The chemical procedure for estimation of creatinine in bioassay samples was standardized and applied successfully to samples collected from the workers. The creatinine excretion rate observed for these workers is lower than that reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and also to study the variability in the creatinine coefficient for the same individual based on the analysis of samples collected for different durations.
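The creatinine normalization described above reduces to simple proportional scaling. A sketch under stated assumptions: the 24-h excretion constant below is an assumed typical adult value, not a figure from the study.

```python
# Assumed typical 24-h creatinine excretion for an adult (mg); individual
# values vary, which is why the study builds a per-worker data bank.
DAILY_CREATININE_MG = 1700.0

def collection_hours(creatinine_in_sample_mg: float,
                     daily_creatinine_mg: float = DAILY_CREATININE_MG) -> float:
    """Estimate the unknown collection duration from the creatinine content,
    assuming a constant excretion rate over the day."""
    return 24.0 * creatinine_in_sample_mg / daily_creatinine_mg

def daily_excretion_rate(activity_in_sample: float,
                         creatinine_in_sample_mg: float) -> float:
    """Scale the activity measured in a partial sample up to a 24-h
    equivalent using the creatinine-based duration estimate."""
    return activity_in_sample * 24.0 / collection_hours(creatinine_in_sample_mg)

# An overnight sample holding 850 mg creatinine represents ~12 h of
# excretion, so its measured activity is doubled to get the daily rate.
print(collection_hours(850.0))           # 12.0 h
print(daily_excretion_rate(3.0, 850.0))  # 6.0 (activity units per day)
```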
Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire
2009-01-01
Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...
Estimates and sampling schemes for the instrumentation of accountability systems
International Nuclear Information System (INIS)
Jewell, W.S.; Kwiatkowski, J.W.
1976-10-01
The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered
Evaluation of sampling strategies to estimate crown biomass
Krishna P Poudel; Hailemariam Temesgen; Andrew N Gray
2015-01-01
Depending on tree and site characteristics crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire...
Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.
1995-01-01
Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today, must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate what is the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded (TCO2) fields needed to constrain geochemical models.
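The redundancy test described, subsample a profile, interpolate the removed points, and compare the worst error against the ±4 µmol/kg accuracy limit, can be sketched on a synthetic profile. The profile shape below is invented for illustration; only the ±4 µmol/kg criterion comes from the abstract.

```python
import math

def max_interp_error(depths, values, keep_every):
    """Drop all but every `keep_every`-th sample, linearly interpolate the
    removed points from the retained ones, and return the worst error."""
    kept = list(range(0, len(depths), keep_every))
    worst = 0.0
    for i in range(len(depths)):
        left = max(k for k in kept if k <= i)
        right = min((k for k in kept if k >= i), default=left)
        if left == right:
            est = values[left]
        else:
            t = (depths[i] - depths[left]) / (depths[right] - depths[left])
            est = values[left] + t * (values[right] - values[left])
        worst = max(worst, abs(est - values[i]))
    return worst

# Synthetic smooth deep-ocean TCO2 profile (umol/kg) on a 50 m grid.
depths = [1000 + 50 * i for i in range(61)]
tco2 = [2350 - 120 * math.exp(-(d - 1000) / 800) for d in depths]
err = max_interp_error(depths, tco2, keep_every=4)
print(f"worst interpolation error: {err:.2f} umol/kg")
```

For this smooth profile, keeping only every fourth sample stays well within the ±4 µmol/kg limit, illustrating the oversampling conclusion.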
Effects of sample size on estimates of population growth rates calculated with matrix models.
Directory of Open Access Journals (Sweden)
Ian J Fiske
BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
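The bias mechanism described above (Jensen's inequality acting on sampling variance of a vital rate) can be sketched numerically. The sketch below uses a hypothetical two-stage matrix and made-up vital rates, not the study's data: survival estimates are drawn binomially for different numbers of sampled individuals, and lambda is recomputed each time.

```python
import numpy as np

rng = np.random.default_rng(1)

def growth_rate(s, f):
    """Dominant eigenvalue (lambda) of a toy 2-stage matrix [[0, f], [s, s]]."""
    A = np.array([[0.0, f], [s, s]])
    return np.max(np.abs(np.linalg.eigvals(A)))

true_s, true_f = 0.5, 1.2           # low survival, the bias-prone case
true_lambda = growth_rate(true_s, true_f)

spread = {}
for n in (10, 50, 500):             # number of individuals sampled
    # survival estimated from n binomial trials, fecundity held fixed
    est = [growth_rate(rng.binomial(n, true_s) / n, true_f)
           for _ in range(2000)]
    spread[n] = (np.mean(est) - true_lambda, np.std(est))
```

As in the paper's findings, the dispersion (and hence the Jensen-type bias) shrinks quickly as the number of sampled individuals grows.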
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has gone into monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1-minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure caption: Comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled with n = 100, 10,000 times, and averaged, and all trials were then averaged to obtain an estimate for each sample interval. Dashed lines represent values from the 1-minute dataset.]
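The interval-comparison idea can be sketched as follows. The sketch uses a synthetic Poisson series of 1-minute wood counts (an assumption, not the Slave River data): coarser intervals keep every k-th minute, scale the subtotal up, and bootstrap the subsample to gauge precision.

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical 1-minute wood counts over a 14-day window
counts = rng.poisson(2.0, size=14 * 24 * 60)
true_total = counts.sum()

results = {}
for k in (1, 5, 10, 15):
    sub = counts[::k]                  # sample every k-th minute
    estimate = sub.sum() * k           # scale by the sampling fraction
    # bootstrap the subsample to gauge precision of the total
    boots = [rng.choice(sub, size=sub.size).sum() * k for _ in range(300)]
    results[k] = (estimate, np.std(boots))
```

Consistent with the abstract, the coarser the interval, the wider the bootstrap spread of the estimated total, while the point estimate remains close to the true seasonal load for an unstructured series.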
Energy Technology Data Exchange (ETDEWEB)
Vieira, Mariana A.; Ribeiro, Anderson S.; Curtius, Adilson J. [Universidade Federal de Santa Catarina, Departamento de Quimica, Florianopolis, SC (Brazil); Sturgeon, Ralph E. [National Research Council Canada, Institute for National Measurement Standards, Ottawa, ON (Canada)
2007-06-15
Cold vapor atomic absorption spectrometry (CV-AAS) based on photochemical reduction by exposure to UV radiation is described for the determination of methylmercury and total mercury in biological samples. Two approaches were investigated: (a) tissues were digested in either formic acid or tetramethylammonium hydroxide (TMAH), and total mercury was determined following reduction of both species by exposure of the solution to UV irradiation; (b) tissues were solubilized in TMAH, diluted to a final concentration of 0.125% m/v TMAH by addition of 10% v/v acetic acid and CH₃Hg⁺ was selectively quantitated, or the initial digests were diluted to 0.125% m/v TMAH by addition of deionized water, adjusted to pH 0.3 by addition of HCl and CH₃Hg⁺ was selectively quantitated. For each case, the optimum conditions for photochemical vapor generation (photo-CVG) were investigated. The photochemical reduction efficiency was estimated to be ≈95% by comparing the response with traditional SnCl₂ chemical reduction. The method was validated by analysis of several biological Certified Reference Materials, DORM-1, DORM-2, DOLT-2 and DOLT-3, using calibration against aqueous solutions of Hg²⁺; results showed good agreement with the certified values for total and methylmercury in all cases. Limits of detection of 6 ng/g for total mercury using formic acid, 8 ng/g for total mercury and 10 ng/g for methylmercury using TMAH were obtained. The proposed methodology is sensitive, simple and inexpensive, and promotes "green" chemistry. The potential for application to other sample types and analytes is evident. (orig.)
International Nuclear Information System (INIS)
Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.
1993-01-01
Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use dependent on length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
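As a concrete illustration of one of the recommended options (a 2-parameter distribution fitted by linear moments), the sketch below fits a Gumbel distribution to a synthetic annual-maximum record via sample L-moments and computes a design flood for a chosen return period. All numbers are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def gumbel_lmom_fit(x):
    """Fit a Gumbel distribution by the first two sample L-moments."""
    x = np.sort(x)
    n = x.size
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # probability-weighted moment
    l1, l2 = b0, 2 * b1 - b0
    alpha = l2 / np.log(2)                          # scale
    xi = l1 - 0.5772156649 * alpha                  # location (Euler's constant)
    return xi, alpha

def design_flood(xi, alpha, T):
    """Gumbel quantile for return period T years."""
    return xi - alpha * np.log(-np.log(1 - 1 / T))

# synthetic annual-maximum record from a known Gumbel(xi=100, alpha=20)
u = rng.random(5000)
flows = 100 - 20 * np.log(-np.log(u))
xi, alpha = gumbel_lmom_fit(flows)
q200 = design_flood(xi, alpha, 200)    # 200-year design flood
```

A cross-validation test bench like the one in the study would repeat such fits on resampled records and score the stability of q200 across samples.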
Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife
Bager, Alex; da Rosa, Clarissa A.
2011-05-01
Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from weekly samplings, by the presence of the ten species most subject to road mortality, and also by the presence of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
ADMIN
For example, MacGibbon and Tomberlin. (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...
Estimation of river and stream temperature trends under haphazard sampling
Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao
2015-01-01
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
Estimation of total amounts of anthropogenic radionuclides in the Japan Sea
International Nuclear Information System (INIS)
Ito, Toshimichi; Otosaka, Shigeyoshi; Kawamura, Hideyuki
2007-01-01
We estimated the total amounts of anthropogenic radionuclides, consisting of ⁹⁰Sr, ¹³⁷Cs, and ²³⁹⁺²⁴⁰Pu, in the Japan Sea for the first time based on experimental data on their concentrations in seawater and seabed sediment. The radionuclide inventories in seawater and seabed sediment at each sampling site varied depending on the water depth, with total inventories for ⁹⁰Sr, ¹³⁷Cs, and ²³⁹⁺²⁴⁰Pu in the range of 0.52-2.8 kBq m⁻², 0.64-4.1 kBq m⁻², and 27-122 Bq m⁻², respectively. Based on the relationship between the inventories and the water depths, the total amounts in the Japan Sea were estimated to be about 1.2±0.4 PBq for ⁹⁰Sr, 1.8±0.7 PBq for ¹³⁷Cs, and 69±14 TBq for ²³⁹⁺²⁴⁰Pu, respectively; the amount ratio ⁹⁰Sr:¹³⁷Cs:²³⁹⁺²⁴⁰Pu was 1.0:1.6:0.059. The amounts of ⁹⁰Sr and ¹³⁷Cs in the Japan Sea were in balance with those supplied from global fallout, whereas the amount of ²³⁹⁺²⁴⁰Pu exceeded that supplied by fallout by nearly 40%. These results suggest a preferential accumulation of the plutonium isotopes. The data used in this study were obtained through a wide-area research project, named the 'Japan Sea expeditions (phase I),' covering the Japanese and Russian exclusive economic zones. (author)
Comparison of sampling techniques for Bayesian parameter estimation
Allison, Rupert; Dunkley, Joanna
2014-02-01
The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
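Of the three prescriptions compared, Metropolis-Hastings is the simplest to write down. The sketch below is a minimal random-walk sampler on a toy one-dimensional standard-normal "posterior" (matching the paper's toy-model Gaussian setting); the step size and chain length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    return -0.5 * theta ** 2          # toy standard-normal log-density

theta, chain = 0.0, []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 1.0)       # symmetric random walk
    # accept with probability min(1, posterior ratio)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)
samples = np.array(chain[2000:])                   # discard burn-in
```

Convergence diagnostics of the kind discussed in the abstract would then be computed on `samples` (e.g., autocorrelation time, comparison of batch means).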
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
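The core of the Hodges-Lehmann step-trend estimator is the median of all pairwise differences between the two sampling periods; a seasonal variant pools only differences formed within matching seasons. The sketch below is a generic illustration of both forms, not the paper's exact procedure.

```python
import numpy as np

def hodges_lehmann_step(before, after):
    """Median of all pairwise (after - before) differences between periods."""
    return np.median(np.subtract.outer(after, before))

def seasonal_hl_step(before, after, season_before, season_after):
    """Pool pairwise differences within matching seasons, then take the median."""
    diffs = []
    for s in np.unique(season_before):
        a = after[season_after == s]
        b = before[season_before == s]
        diffs.append(np.subtract.outer(a, b).ravel())
    return np.median(np.concatenate(diffs))
```

For example, a uniform step of +2 between periods is recovered exactly by either estimator, and the median makes both robust to outlying water-quality values.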
Estimating rare events in biochemical systems using conditional sampling
Sundar, V. S.
2017-01-01
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
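The product-of-conditional-probabilities idea can be shown on the simplest possible stand-in for the mapped state vector: estimating P(Z ≥ 4) for a standard normal Z, whose exact value is about 3.2e-5. This is a generic subset-simulation sketch with a modified Metropolis chain, not the paper's implementation; thresholds and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def subset_simulation(thresh=4.0, n=2000, p0=0.1):
    """Estimate P(Z >= thresh), Z ~ N(0,1), as a product of conditional probabilities."""
    x = rng.normal(size=n)                  # level 0: crude Monte Carlo
    prob = 1.0
    for _ in range(10):
        frac = np.mean(x >= thresh)
        if frac >= p0:                      # target region reached often enough
            return prob * frac
        level = np.quantile(x, 1.0 - p0)    # intermediate threshold
        prob *= p0                          # conditional probability of this level
        seeds = x[x >= level][: int(p0 * n)]
        out = []
        for s in seeds:                     # Metropolis chains confined to {x >= level}
            cur = s
            for _ in range(n // len(seeds)):
                cand = cur + rng.normal()
                ok = np.log(rng.random()) < 0.5 * (cur ** 2 - cand ** 2)
                if ok and cand >= level:
                    cur = cand
                out.append(cur)
        x = np.array(out[:n])
    return prob * np.mean(x >= thresh)

p_rare = subset_simulation()
```

Each level multiplies in a conditional probability of roughly p0, so only a handful of levels are needed to reach probabilities far below what crude Monte Carlo of the same size could resolve.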
Ranked set sampling for ecological research: accounting for the total costs of sampling
Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
Sánchez, Carlos; Lienemann, Charles-Philippe; Todolí, Jose-Luis
2016-10-01
Bioethanol real samples have been directly analyzed through ICP-MS by means of the so-called High Temperature Torch Integrated Sample Introduction System (hTISIS). Because bioethanol samples may contain water, experiments have been carried out in order to determine the effect of ethanol concentration on the ICP-MS response. The ethanol content studied ranged from 0 to 50%, because higher alcohol concentrations led to carbon deposits on the ICP-MS interface. The spectrometer default spray chamber (double pass) equipped with a glass concentric pneumatic micronebulizer has been taken as the reference system. Two flow regimes have been evaluated: continuous sample aspiration at 25 μL min⁻¹ and 5 μL air-segmented sample injection. hTISIS temperature has been shown to be critical; in fact, ICP-MS sensitivity increased with this variable up to 100-200 °C depending on the solution tested. Higher chamber temperatures led to either a drop in signal or a plateau. Compared with the reference system, the hTISIS improved the sensitivities by a factor within the 4 to 8 range, while average detection limits were 6 times lower for the latter device. Regarding the influence of the ethanol concentration on sensitivity, it has been observed that an increase in the temperature was not enough to eliminate the interferences. It was also necessary to modify the torch position with respect to the ICP-MS interface to overcome them. This fact was likely due to the different extent of ion plasma radial diffusion encountered as a function of the matrix when working at high chamber temperatures. When the torch was moved 1 mm down the plasma axis, ethanolic and aqueous solutions provided statistically equal sensitivities. A preconcentration procedure has been applied in order to validate the methodology. It has been found that, under optimum conditions from the point of view of matrix effects, recoveries for spiked samples were close to 100%. Furthermore, analytical concentrations for real
Saleh, Anas; Small, Travis; Chandran Pillai, Aiswarya Lekshmi Pillai; Schiltz, Nicholas K; Klika, Alison K; Barsoum, Wael K
2014-09-17
The large-scale utilization of allogenic blood transfusion and its associated outcomes have been described in critically ill patients and those undergoing high-risk cardiac surgery but not in patients undergoing elective total hip arthroplasty. The objective of this study was to determine the trends in utilization and outcomes of allogenic blood transfusion in patients undergoing primary total hip arthroplasty in the United States from 2000 to 2009. An observational cohort of 2,087,423 patients who underwent primary total hip arthroplasty from 2000 to 2009 was identified in the Nationwide Inpatient Sample. International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes 99.03 and 99.04 were used to identify patients who received allogenic blood products during their hospital stay. Risk factors for allogenic transfusions were identified with use of multivariable logistic regression models. We used propensity score matching to estimate the adjusted association between transfusion and surgical outcomes. The rate of allogenic blood transfusion increased from 11.8% in 2000 to 19.0% in 2009. Patient-related risk factors for receiving an allogenic blood transfusion include an older age, female sex, black race, and Medicaid insurance. Hospital-related risk factors include rural location, smaller size, and non-academic status. After adjusting for confounders, allogenic blood transfusion was associated with a longer hospital stay (0.58 ± 0.02 day; p conservation methods. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Estimation of tritium activity in bioassay samples having chemiluminescence
International Nuclear Information System (INIS)
Dwivedi, R.K.; Manu, Kumar; Kumar, Vinay; Soni, Ashish; Kaushik, A.K.; Tiwari, S.K.; Gupta, Ashok
2008-01-01
Tritium is recognized as a major internal dose contributor in PHWR type reactors. Estimation of internal dose due to tritium is carried out by analyzing urine samples in a liquid scintillation analyzer (LSA). Presence of residual biochemical species in urine samples of some individuals under medical administration shows a significant amount of chemiluminescence. If appropriate care is not taken, the results obtained by liquid scintillation counter may be mistaken for genuine uptake of tritium. The distillation method described in this paper is used at RAPS-3 and 4 to assess correct tritium uptake. (author)
Estimation of sample size and testing power (Part 3).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2011-12-01
This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
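For the non-inferiority case with a binary response, a standard normal-approximation formula gives the per-group sample size. The sketch below is a generic illustration, not the article's formulas or SAS code; the function name and defaults are assumptions.

```python
import math
from statistics import NormalDist

def n_per_group_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Normal-approximation sample size per group for a non-inferiority
    test of two proportions (higher response rate is better)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # one-sided type I error
    z_beta = NormalDist().inv_cdf(power)
    variance = p_t * (1 - p_t) + p_c * (1 - p_c)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_t - p_c + margin) ** 2)
```

For example, equal 80% response rates, a 10-percentage-point margin, one-sided alpha = 0.025 and 80% power give 252 subjects per group under this approximation.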
Estimation of sample size and testing power (part 6).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-03-01
The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia
2016-01-01
Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small (monitoring may be beneficial.
Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests
Directory of Open Access Journals (Sweden)
Bruno Giacomini Sari
2017-09-01
Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants, and the others obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap resamplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a confidence interval of 95% equal to 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
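The bootstrap procedure behind this sample-size rule can be sketched directly: resample pairs at a planned size, take the 95% percentile interval of the correlation coefficient, and check whether its amplitude is at or below 0.4. The data below are synthetic (a moderate linear relation), not the cherry tomato measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic "uniformity trial": moderate linear relation (population r ~ 0.6)
pop = 400
x = rng.normal(size=pop)
y = 0.75 * x + rng.normal(size=pop)

def ci_amplitude(n, reps=1000):
    """Amplitude of the 95% bootstrap percentile interval of Pearson r
    when sampling n plants with replacement."""
    r = []
    for _ in range(reps):
        idx = rng.integers(0, pop, size=n)
        r.append(np.corrcoef(x[idx], y[idx])[0, 1])
    lo, hi = np.percentile(r, [2.5, 97.5])
    return hi - lo

amp_small, amp_large = ci_amplitude(30), ci_amplitude(200)
```

As in the study, the amplitude shrinks as the planned sample size grows, and the smallest n whose amplitude falls at or below 0.4 is the required sample size.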
Lasky, Tamar; Sun, Wenyu; Kadry, Abdel; Hoffman, Michael K
2004-01-01
The purpose of this study was to estimate mean concentrations of total arsenic in chicken liver tissue and then estimate total and inorganic arsenic ingested by humans through chicken consumption. We used national monitoring data from the Food Safety and Inspection Service National Residue Program to estimate mean arsenic concentrations for 1994-2000. Incorporating assumptions about the concentrations of arsenic in liver and muscle tissues as well as the proportions of inorganic and organic a...
The Sandia total-dose estimator: SANDOSE description and user guide
International Nuclear Information System (INIS)
Turner, C.D.
1995-02-01
The SANdia total-DOSe Estimator (SANDOSE) is used to estimate total radiation dose to a BRL-CAD solid model. SANDOSE uses the mass-sectoring technique to sample the model using ray-tracing techniques. The code is integrated directly into the BRL-CAD solid model editor and is operated using a simple graphical user interface. Several diagnostic tools are available to allow the user to analyze the results. Based on limited validation using several benchmark problems, results can be expected to fall between a 10% underestimate and a factor of 2 overestimate of the actual dose predicted by rigorous radiation transport techniques. However, other situations may be encountered where the results might fall outside of this range. The code is written in C and uses X-windows graphics. It presently runs on SUN SPARCstations, but in theory could be ported to any workstation with a C compiler and X-windows. SANDOSE is available via license by contacting either the Sandia National Laboratories Technology Transfer Center or the author
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer, who provides the correct count. Even though the image analysis … cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
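The estimation step of PPS sampling can be sketched in a few lines: draw fields of view with probability proportional to their weight and divide each observed count by its selection probability. The sketch below uses hypothetical weights and counts and the Hansen-Hurwitz (with-replacement) form of the estimator for simplicity; it is not the proportionator software itself.

```python
import numpy as np

rng = np.random.default_rng(11)

# hypothetical fields of view: a "size" weight and a true count per field
weights = rng.uniform(1.0, 10.0, size=200)
counts = rng.poisson(weights)            # true counts roughly track the weight
true_total = counts.sum()

p = weights / weights.sum()              # PPS selection probabilities

def pps_total(n_fields=20):
    """Hansen-Hurwitz estimate of the total from n PPS draws (with replacement)."""
    idx = rng.choice(weights.size, size=n_fields, p=p)
    return np.mean(counts[idx] / p[idx])

estimates = [pps_total() for _ in range(2000)]
```

Because each sampled count is inflated by the inverse of its selection probability, the estimator is unbiased for the total even though only a small fraction of fields is ever counted by the observer.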
A method for estimating radioactive cesium concentrations in cattle blood using urine samples.
Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji
2017-12-01
In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine ¹³⁷Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood ¹³⁷Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood ¹³⁷Cs] = [urinary ¹³⁷Cs]/([specific gravity] − 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher ¹³⁷Cs concentration than blood. These advantages of urine and the estimation precision demonstrated in our study indicate that estimation of blood ¹³⁷Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
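The best-performing correction reported above is a one-line computation. The sketch below simply packages the paper's specific-gravity formula as a function (the function name and units are illustrative; the constant 329 is from the abstract).

```python
def blood_cs137(urinary_cs137, specific_gravity):
    """Estimate blood 137Cs from urinary 137Cs (same activity units)
    using the specific-gravity correction: urine / (SG - 1) / 329."""
    return urinary_cs137 / (specific_gravity - 1.0) / 329.0
```

For example, a urinary measurement of 329 Bq/L at specific gravity 1.020 gives an estimated blood concentration of 50 Bq/L.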
Directory of Open Access Journals (Sweden)
Ascaso Carlos
2010-04-01
Full Text Available Abstract Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter, we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results The approach is illustrated in a real case example where the agreement between two instruments, a handheld mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects where measures of systolic blood pressure were taken twice by each device. A simulation study procedure is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TI based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
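For normally distributed paired differences D ~ N(mu, sigma), the TDI at level p is the p-th quantile of |D|, i.e. the smallest k with P(|D| <= k) = p. The sketch below solves this by bisection on the folded-normal probability; it is a generic illustration of the quantity being estimated, not the paper's tolerance-interval inference procedure.

```python
from statistics import NormalDist

def tdi(mu, sigma, p=0.95):
    """p-th quantile of |D| where D ~ N(mu, sigma): the total deviation index."""
    nd = NormalDist(mu, sigma)
    lo, hi = 0.0, abs(mu) + 10.0 * sigma
    for _ in range(200):                 # bisection on P(|D| <= k) = p
        mid = (lo + hi) / 2.0
        if nd.cdf(mid) - nd.cdf(-mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With no systematic bias between devices (mu = 0), TDI at p = 0.95 reduces to the familiar 1.96·sigma, and any bias between devices widens it.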
Sampling strategy for estimating human exposure pathways to consumer chemicals
Directory of Open Access Journals (Sweden)
Eleni Papadopoulou
2016-03-01
Full Text Available Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway. The selected groups of chemicals to be studied are consumer chemicals whose production and use are currently in a state of transition: per- and polyfluorinated alkyl substances (PFASs), traditional and “emerging” brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed due to existing data gaps on human exposure intakes from multiple exposure pathways and relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days, during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, food frequency questionnaire (FFQ) and indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total, and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collecting duplicate food/drink portions were the tasks most frequently reported as “hard”/”very hard”. Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, an indication of accurate performance.
Increasing fMRI sampling rate improves Granger causality estimates.
Directory of Open Access Journals (Sweden)
Fa-Hsuan Lin
Full Text Available Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast-based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we sinc-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
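The effect of sampling rate on Granger causality can be illustrated with a toy simulation (not fMRI data, and not the InI pipeline): x drives y with a one-step delay, and a single-lag Granger F-statistic detects this at full resolution but not after 20-fold downsampling.

```python
import numpy as np

def granger_f(x, y, lag=1):
    """F-statistic for 'x Granger-causes y' with a single lag (minimal sketch)."""
    n = len(y)
    Y = y[lag:]
    Xr = np.column_stack([np.ones(n - lag), y[:-lag]])   # restricted: own past only
    Xf = np.column_stack([Xr, x[:-lag]])                 # full: adds past of x
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2)
    return (rss_r - rss_f) / (rss_f / (n - lag - Xf.shape[1]))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):                  # x drives y with a one-step delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_fast = granger_f(x, y)                  # causality visible at full sampling rate
f_slow = granger_f(x[::20], y[::20])      # largely lost after 20-fold downsampling
```

The downsampled series retains almost none of the one-step coupling, mirroring the abstract's point that causal structure faster than the sampling interval is invisible to the analysis.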
TruSeq Stranded mRNA and Total RNA Sample Preparation Kits
Total RNA-Seq enabled by ribosomal RNA (rRNA) reduction is compatible with formalin-fixed paraffin embedded (FFPE) samples, which contain potentially critical biological information. The family of TruSeq Stranded Total RNA sample preparation kits provides a unique combination of unmatched data quality for both mRNA and whole-transcriptome analyses, robust interrogation of both standard and low-quality samples and workflows compatible with a wide range of study designs.
Directory of Open Access Journals (Sweden)
Martinásková Magdalena
2017-12-01
Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol’ sequences. All results are compared with the exact failure probability value.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
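The diagonal-averaging step (without the subsequent maximum-entropy extrapolation) can be sketched as follows. For a symmetric sample covariance, averaging along subdiagonals is the Frobenius-norm projection onto Toeplitz matrices, so it can only reduce the error relative to a true Toeplitz covariance. Values are illustrative, not sonar data:

```python
import numpy as np

def toeplitz_constrain(R):
    """Average along subdiagonals of a (Hermitian) sample covariance matrix."""
    n = R.shape[0]
    c = np.array([np.diag(R, k).mean() for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = c[j - i] if j >= i else np.conj(c[i - j])
    return T

# few-snapshot sample covariance of a stationary process is noisy;
# diagonal averaging pulls it toward the true Toeplitz structure
rng = np.random.default_rng(5)
n, snaps = 8, 10
true_c = 0.9 ** np.abs(np.subtract.outer(range(n), range(n)))  # AR(1)-like, Toeplitz
L = np.linalg.cholesky(true_c)
X = L @ rng.standard_normal((n, snaps))
R = (X @ X.T) / snaps                      # limited-snapshot sample covariance
T = toeplitz_constrain(R)
```

In the paper, eigenvectors of the constrained (and extrapolated) covariance then build the signal-subspace projector used for beamforming.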
International Nuclear Information System (INIS)
Simabuco, S.M.; Matsumoto, E.; Jesus, E.F.O.; Lopes, R.T.; Perez, C.; Nascimento Filho, V.F.; Costa, R.S.S.; Tavares do Carmo, M.G.; Saunders, C.
2001-01-01
Full text: Total Reflection X-ray Fluorescence has been applied to the determination of trace elements in water and aqueous solutions, environmental samples and biological materials after sample preparation, and to the surface analysis of silicon wafers. The present paper shows some results of applications to rainwater, atmospheric particulate material, colostrum and nuclear samples. (author)
Estimation of salt intake from spot urine samples in patients with chronic kidney disease
Directory of Open Access Journals (Sweden)
Ogura Makoto
2012-06-01
Full Text Available Abstract Background High salt intake in patients with chronic kidney disease (CKD) may cause high blood pressure and increased albuminuria. Although estimation of salt intake is essential, there are no easy methods to estimate real salt intake. Methods Salt intake was assessed by determining urinary sodium excretion from the collected urine samples. Estimation of salt intake by spot urine was calculated by Tanaka’s formula. The correlation between estimated and measured sodium excretion was evaluated by Pearson's correlation coefficients. Performance of the equation was estimated by median bias, interquartile range (IQR), proportion of estimates within 30% deviation of measured sodium excretion (P30) and root mean square error (RMSE). The sensitivity and specificity of estimated against measured sodium excretion were separately assessed by receiver-operating characteristic (ROC) curves. Results A total of 334 urine samples from 96 patients were examined. Mean age was 58 ± 16 years, and estimated glomerular filtration rate (eGFR) was 53 ± 27 mL/min. Among these patients, 35 had CKD stage 1 or 2, 39 had stage 3, and 22 had stage 4 or 5. Estimated sodium excretion significantly correlated with measured sodium excretion (R = 0.52, P 170 mEq/day (AUC 0.835). Conclusions The present study demonstrated that spot urine can be used to estimate sodium excretion, especially in patients with low eGFR.
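For reference, a sketch of Tanaka's spot-urine formula as commonly published (the coefficients here are from memory of the widely cited form and should be verified against the original paper; the input values are illustrative, not the study's data):

```python
def tanaka_na24(spot_na_meq_l, spot_cr_mg_dl, age, weight_kg, height_cm):
    """Estimated 24-h urinary Na excretion (mEq/day), Tanaka-type formula.

    Coefficients as commonly published -- verify against Tanaka et al. before use.
    """
    # predicted 24-h creatinine excretion (mg/day)
    pr_cr = -2.04 * age + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # spot Na/Cr ratio scaled by predicted creatinine (Cr converted mg/dL -> mg/L)
    x_na = spot_na_meq_l / (spot_cr_mg_dl * 10.0) * pr_cr
    return 21.98 * x_na ** 0.392

# illustrative patient: mean age from the abstract, assumed anthropometrics
na24 = tanaka_na24(spot_na_meq_l=100, spot_cr_mg_dl=100,
                   age=58, weight_kg=60, height_cm=165)
salt_g = na24 * 58.5 / 1000.0     # mEq Na/day -> g NaCl/day
```

The conversion at the end uses 1 mEq Na = 58.5 mg NaCl, which is how "salt intake" in grams per day is usually reported.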
Directory of Open Access Journals (Sweden)
Hon-Cheong So
2010-11-01
Full Text Available Recently genome-wide association studies (GWAS) have identified numerous susceptibility variants for complex diseases. In this study we proposed several approaches to estimate the total number of variants underlying these diseases. We assume that the variance explained by genetic markers (Vg) follows an exponential distribution, which is justified by previous studies on theories of adaptation. Our aim is to fit the observed distribution of Vg from GWAS to its theoretical distribution. The number of variants is obtained by the heritability divided by the estimated mean of the exponential distribution. In practice, due to limited sample sizes, there is insufficient power to detect variants with small effects. Therefore the power was taken into account in fitting. Besides considering the most significant variants, we also tried to relax the significance threshold, allowing more markers to be fitted. The effects of false positive variants were removed by considering the local false discovery rates. In addition, we developed an alternative approach by directly fitting the z-statistics from GWAS to their theoretical distribution. In all cases, the "winner's curse" effect was corrected analytically. Confidence intervals were also derived. Simulations were performed to compare and verify the performance of the different estimators (which incorporate various means of winner's curse correction) and the coverage of the proposed analytic confidence intervals. Our methodology only requires summary statistics and is able to handle both binary and continuous traits. Finally we applied the methods to a few real disease examples (lipid traits, type 2 diabetes and Crohn's disease) and estimated that hundreds to nearly a thousand variants underlie these traits.
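The core estimator (number of variants = heritability divided by the mean of the exponential Vg distribution) can be sketched as follows. This ignores the power, winner's-curse and false-positive corrections the paper applies, and the assumed heritability and mean Vg are illustrative:

```python
import random

random.seed(42)
h2 = 0.5               # trait heritability (assumed for illustration)
true_mean_vg = 0.0005  # mean variance explained per variant (assumed)
true_n = int(h2 / true_mean_vg)   # 1000 variants in this toy setting

# simulate the Vg values a fully powered GWAS would observe
vg = [random.expovariate(1.0 / true_mean_vg) for _ in range(true_n)]

# naive estimator: heritability / fitted exponential mean (MLE = sample mean)
est_n = h2 / (sum(vg) / len(vg))
```

With realistic sample sizes only the large-Vg tail is observed, which is why the paper fits a power-truncated exponential rather than taking the raw sample mean as done here.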
International Nuclear Information System (INIS)
Nanda, Braja B.; Acharya, R.
2017-01-01
Total aluminium contents in various food samples were determined by Instrumental Neutron Activation Analysis (INAA) and Particle Induced Gamma-ray Emission (PIGE) methods. A total of 16 rice samples, collected from the field, were analyzed by INAA using reactor neutrons from the Dhruva reactor, whereas a total of 17 spice samples, collected from the market, were analyzed by both INAA and PIGE methods in conjunction with high resolution gamma-ray spectrometry. Aluminium concentration values were found to be in the range of 19-845 mg kg⁻¹ for spices and 15-104 mg kg⁻¹ for rice samples. The methods were validated by analyzing standard reference materials (SRMs) from NIST. (author)
Tseng, C. M.; Garraud, H.; Amouroux, D.; Donard, O. F. X.; de Diego, A.
1998-01-01
This paper describes rapid, simple microwave-assisted leaching/digestion procedures for the determination of total mercury and mercury species in sediment samples and biomaterials. An open focused microwave system allowed the sample preparation time to be dramatically reduced to only 24 min when a power of 40-80 W was applied. Quantitative leaching of methylmercury from sediments by HNO₃ solution and complete dissolution of biomaterials by an alkaline solution, such as 25% TMAH solution, were obtained. Met...
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
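A minimal sketch of the described procedure, assuming a white-noise scene and a Gaussian PSF: the slope of log|F|² against squared spatial frequency recovers the PSF width, from which the MTF follows. This is an illustrative simulation, not the authors' test images:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_true = 256, 2.0

# white-noise "scene" blurred by a Gaussian PSF (applied in Fourier space)
scene = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
f2 = fx[None, :] ** 2 + fx[:, None] ** 2               # squared spatial frequency
otf = np.exp(-2 * np.pi ** 2 * sigma_true ** 2 * f2)   # Gaussian PSF -> Gaussian OTF
blurred = np.fft.ifft2(np.fft.fft2(scene) * otf).real

# log squared norm of the Fourier transform vs squared frequency
power = np.abs(np.fft.fft2(blurred)) ** 2
mask = (f2 > 0) & (f2 < 0.02)                  # low-frequency band, origin excluded
slope = np.polyfit(f2[mask], np.log(power[mask]), 1)[0]

sigma_est = np.sqrt(-slope / (4 * np.pi ** 2))  # slope = -4 * pi^2 * sigma^2
mtf = lambda f: np.exp(-2 * np.pi ** 2 * sigma_est ** 2 * f ** 2)
```

The linear fit works because a flat scene spectrum leaves only the Gaussian OTF term in the log-power plot, exactly the linearity the authors observe in their logarithmic plots.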
Photometry-based estimation of the total number of stars in the Universe.
Manojlović, Lazo M
2015-07-20
A novel photometry-based estimation of the total number of stars in the Universe is presented. The estimation method is based on the energy conservation law and actual measurements of the extragalactic background light levels. By assuming that every radiated photon is kept within the Universe volume, i.e., by approximating the Universe as an integrating cavity without losses, a total number of stars in the Universe of about 6×10²² has been obtained.
International Nuclear Information System (INIS)
Johnson, K.; Lucas, R.
1986-12-01
In developing a methodology for assessing potential sites for the disposal of radioactive wastes, the Department of the Environment has conducted a series of trial assessment exercises. In order to produce converged estimates of radiological risk using the SYVAC A/C simulation system an efficient sampling procedure is required. Previous work has demonstrated that importance sampling can substantially increase sampling efficiency. This study used importance sampling to produce converged estimates of risk for the first DoE trial assessment. Four major nuclide chains were analysed. In each case importance sampling produced converged risk estimates with between 10 and 170 times fewer runs of the SYVAC A/C model. This increase in sampling efficiency can reduce the total elapsed time required to obtain a converged estimate of risk from one nuclide chain by a factor of 20. The results of this study suggest that the use of importance sampling could reduce the elapsed time required to perform a risk assessment of a potential site by a factor of ten. (author)
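The variance-reduction idea can be illustrated on a generic rare-event problem (a standard-normal tail probability, not the SYVAC model): shifting the sampling density toward the failure region and reweighting by the density ratio yields a usable estimate where plain Monte Carlo sees almost no failures.

```python
import math
import random

random.seed(7)
th = 4.0                       # failure threshold; true P(X > 4) is about 3.2e-5

def phi(x, mu=0.0):
    """Normal density with unit variance, mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

n = 10_000

# plain Monte Carlo: almost no samples land in the failure region
mc = sum(random.gauss(0.0, 1.0) > th for _ in range(n)) / n

# importance sampling: draw from N(th, 1), reweight by the density ratio
draws = [random.gauss(th, 1.0) for _ in range(n)]
isamp = sum((x > th) * phi(x) / phi(x, th) for x in draws) / n
```

With the same number of model runs, the importance-sampling estimate converges to the tail probability while the plain estimate is dominated by zero counts, which is the efficiency gain the study exploits for the SYVAC runs.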
McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana
2018-02-01
The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments was used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data was used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method in reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences in taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49; P sampling frequency needed to accurately estimate SMCs of pollutants.
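The EMC/SMC bookkeeping described above reduces to flow- and volume-weighted averages; a minimal sketch with illustrative numbers (not the catchment data):

```python
def emc(concs, flows):
    """Flow-weighted Event Mean Concentration from paired within-event samples."""
    return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

# (event runoff volume in m^3, EMC of TSS in mg/L) -- illustrative values
events = [
    (120.0, 95.0),
    (340.0, 150.0),
    (80.0, 60.0),
]

# Site Mean Concentration: volume-weighted average of the EMCs across events
total_volume = sum(v for v, _ in events)
smc = sum(v * e for v, e in events) / total_volume
```

A candidate sampling strategy is then judged by how closely the SMC computed from its subset of samples reproduces the SMC from the full flow-weighted record, which is the comparison the study performs.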
Paulson, Anthony J.; Conn, Kathleen E.; DeWild, John F.
2013-01-01
Previous investigations examined sources and sinks of mercury to Sinclair Inlet based on historic and new data. This included an evaluation of mercury concentrations from various sources and mercury loadings from industrial discharges and groundwater flowing from the Bremerton naval complex to Sinclair Inlet. This report provides new data from four potential sources of mercury to Sinclair Inlet: (1) filtered and particulate total mercury concentrations of creek water during the wet season, (2) filtered and particulate total mercury releases from the Navy steam plant following changes in the water softening process and discharge operations, (3) release of mercury from soils to groundwater in two landfill areas at the Bremerton naval complex, and (4) total mercury concentrations of solids in dry dock sumps that were not affected by bias from sequential sampling. The previous estimate of the loading of filtered total mercury from Sinclair Inlet creeks was based solely on dry season samples. Concentrations of filtered total mercury in creek samples collected during wet weather were significantly higher than dry weather concentrations, which increased the estimated loading of filtered total mercury from creek basins from 27.1 to 78.1 grams per year. Changes in the concentrations and loading of filtered and particulate total mercury in the effluent of the steam plant were investigated after the water softening process was changed from ion-exchange to reverse osmosis and the discharge of stack blow-down wash began to be diverted to the municipal water-treatment plant. These changes reduced the concentrations of filtered and particulate total mercury from the steam plant of the Bremerton naval complex, which resulted in reduced loadings of filtered total mercury from 5.9 to 0.15 grams per year. Previous investigations identified three fill areas on the Bremerton naval complex, of which the western fill area is thought to be the largest source of mercury on the base
Estimated Intakes and Sources of Total and Added Sugars in the Canadian Diet
Brisbois, Tristin D.; Marsden, Sandra L.; Anderson, G. Harvey; Sievenpiper, John L.
2014-01-01
National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data a...
Directory of Open Access Journals (Sweden)
Roberta Ariboni Brandi
2011-03-01
Full Text Available The objective of this study was to evaluate various indicators for estimating total apparent nutrient digestibility in horses. We used four adult mares of no defined breed, grouped in a balanced 4 x 4 Latin square, fed diets containing equal parts of Tifton 85 hay (Cynodon sp.) and an experimental concentrate containing corn subjected to one of four processes: diet 1, ground corn; diet 2, flaked corn; diet 3, rolled corn; and diet 4, extruded corn. The digestibility coefficients of nutrients obtained from the indicators were weighted by means of the bias. Accuracy and precision were determined by comparing predicted and observed data, and the robustness of the biases by comparison with the other factors studied. The chromic oxide method showed values of apparent nutrient digestibility similar to those of the total collection method. We observed higher accuracy for acid detergent lignin as compared to the other indicators tested. However, acid detergent lignin underestimated the digestibility of nutrients when compared to total collection, and acid detergent insoluble ash overestimated it. Chromic oxide is presented as the better indicator for estimating total apparent digestibility in horses due to its higher accuracy among the markers evaluated.
Analytical Method to Estimate the Complex Permittivity of Oil Samples
Directory of Open Access Journals (Sweden)
Lijuan Su
2018-03-01
Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
Estimated Intakes and Sources of Total and Added Sugars in the Canadian Diet
Directory of Open Access Journals (Sweden)
Tristin D. Brisbois
2014-05-01
Full Text Available National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data and adjusting these for wastage to estimate intakes. Added sugars intakes were estimated from CCHS data by categorizing the sugars content of food groups as either added or naturally occurring. Added sugars accounted for approximately half of total sugars consumed. Annual availability data were obtained from the Statistics Canada CANSIM database. Estimates for added sugars were obtained by summing the availability of “sugars and syrups” with availability of “soft drinks” (proxy for high fructose corn syrup) and adjusting for waste. Analysis of both survey and availability data suggests that added sugars average 11%–13% of total energy intake. Availability data indicate that added sugars intakes have been stable or modestly declining as a percent of total energy over the past three decades. Although these are best estimates based on available data, this analysis may encourage the development of better databases to help inform public policy recommendations.
Zinc estimates in ore and slag samples and analysis of ash in coal samples
International Nuclear Information System (INIS)
Umamaheswara Rao, K.; Narayana, D.G.S.; Subrahmanyam, Y.
1984-01-01
Zinc estimates in ore and slag samples were made using the radioisotope X-ray fluorescence method. A 10 mCi ²³⁸Pu source was employed as the primary source of radiation and a thin-crystal NaI(Tl) spectrometer was used to accomplish the detection of the 8.64 keV zinc K-characteristic X-ray line. The results are reported. The ash content of about 100 coal samples from the Ravindra Khani VI and VII mines in Andhra Pradesh was measured using the X-ray backscattering method, with compensation for varying concentrations of iron in different coal samples through iron X-ray fluorescence intensity measurements. The ash percent is found to range from 10 to 40. (author)
Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.
Dołęgowska, Sabina
2016-11-01
In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m², whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple rinsed with deionized water, dried, milled, and digested (8 mL HNO₃ (1:1) + 1 mL 30% H₂O₂) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality, and for normally distributed elements (Cu from Piaski; Zn from Posłowice; Fe and Zn from Wierna Rzeka) the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
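For a primary/duplicate design, the classical-ANOVA estimate of the combined sampling(+analysis) uncertainty reduces to the within-target mean square, which for pairs is the mean of squared differences divided by two. A sketch with illustrative concentrations (not the study's data):

```python
import statistics

# primary/duplicate pairs (illustrative Zn concentrations, mg/kg)
pairs = [(42.1, 44.8), (39.5, 37.2), (51.0, 48.3), (45.6, 47.9),
         (40.2, 41.5), (47.8, 44.1), (43.3, 46.0), (38.9, 40.6)]

# one-way ANOVA with duplicates: within-target mean square estimates the
# combined sampling(+analysis) variance
ms_within = statistics.mean((a - b) ** 2 / 2.0 for a, b in pairs)
s_samp = ms_within ** 0.5

grand_mean = statistics.mean(x for p in pairs for x in p)
rel_uncert_pct = 100.0 * s_samp / grand_mean   # relative standard uncertainty, %
```

The robust variants mentioned in the abstract (RANOVA, range statistics) replace this mean square with estimators that down-weight outlying pairs, but the underlying duplicate-design logic is the same.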
Lepot, Mathieu; Aubin, Jean-Baptiste; Bertrand-Krajewski, Jean-Luc
2013-01-01
Many field investigations have used continuous sensors (turbidimeters and/or ultraviolet (UV)-visible spectrophotometers) to estimate with a short time step pollutant concentrations in sewer systems. Few, if any, publications compare the performance of various sensors for the same set of samples. Different surrogate sensors (turbidity sensors, UV-visible spectrophotometer, pH meter, conductivity meter and microwave sensor) were tested to link concentrations of total suspended solids (TSS), total and dissolved chemical oxygen demand (COD), and sensors' outputs. In the combined sewer at the inlet of a wastewater treatment plant, 94 samples were collected during dry weather, 44 samples were collected during wet weather, and 165 samples were collected under both dry and wet weather conditions. From these samples, triplicate standard laboratory analyses were performed and corresponding sensors outputs were recorded. Two outlier detection methods were developed, based, respectively, on the Mahalanobis and Euclidean distances. Several hundred regression models were tested, and the best ones (according to the root mean square error criterion) are presented in order of decreasing performance. No sensor appears as the best one for all three investigated pollutants.
DEFF Research Database (Denmark)
Brüel, Annemarie; Nyengaard, Jens Randel
2005-01-01
BACKGROUND: Counting the total number of cardiac myocytes has not previously been possible in ordinary histological sections using light microscopy (LM) due to difficulties in defining the myocyte borders properly. AIM: To describe a method by which the total number of cardiac myocytes is estimated in LM sections using design-based stereology. MATERIALS AND METHODS: From formalin-fixed left rat ventricles (LV), isotropic uniformly random sections were cut. The total number of myocyte nuclei per LV was estimated using the optical disector. Two-µm-thick serial paraffin sections were stained with antibodies against cadherin and type IV collagen to visualise the intercalated discs and the myocyte membranes, respectively. Using the physical disector in "local vertical windows" of the serial sections, the average number of nuclei per myocyte was estimated. RESULTS: The total number of myocyte nuclei...
Spatially explicit population estimates for black bears based on cluster sampling
Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.
2017-01-01
We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km² during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km² (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km² (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.
Energy Technology Data Exchange (ETDEWEB)
Mandjukov, Petko; Orani, Anna Maria; Han, Eunmi; Vassileva, Emilia, E-mail: e.vasileva-veleva@iaea.org
2015-01-01
The most critical step in almost all commonly used analytical procedures for Hg determination is the sample preparation, due to the extreme volatility of this element. One possible solution to this problem is the application of methods for direct analysis of solid samples. The possibilities of solid sampling high resolution continuum source atomic absorption spectrometry (HR CS AAS) for the determination of total mercury in various marine environmental samples, e.g. sediments and biota, are the object of the present study. The instrumental parameters were optimized in order to obtain a reproducible and interference-free analytical signal. A calibration technique based on the use of solid certified reference materials similar in nature to the analyzed sample was developed and applied to various CRMs and real samples. This technique allows simple and reliable evaluation of the uncertainty of the result and of the metrological characteristics of the method. A validation approach in line with the requirements of the ISO 17025 standard and Eurachem guidelines was followed. With this in mind, selectivity, working range (0.06 to 25 ng for biota and 0.025 to 4 ng for sediment samples, expressed as total Hg), linearity (confirmed by Student's t-test), bias (1.6–4.3%), repeatability (4–9%), reproducibility (9–11%), and absolute limit of detection (0.025 ng for sediment, 0.096 ng for marine biota) were systematically assessed using solid CRMs. The relative expanded uncertainty was estimated at 15% for the sediment sample and 8.5% for the marine biota sample (k = 2). Demonstration of traceability of measurement results is also presented. The potential of the proposed analytical procedure, based on the solid sampling HR CS AAS technique, was demonstrated by direct analysis of sea sediments from the Caribbean region and various CRMs. Overall, the use of solid sampling HR CS AAS permits obtaining significant advantages for the determination of this complex analyte in marine samples, such as
Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?
DEFF Research Database (Denmark)
Lewy, Peter; Lassen, H.
1997-01-01
Many ICES fish stock assessment working groups have practised the Sum Of Products (SOP) correction. This correction stems from a comparison of the total weight of the known landings with the SOP over age of catch in numbers and mean weight-at-age, which ideally should be identical. In case of SOP discrepancies some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landings and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made, since the SOP is an unbiased estimate of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from...
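The SOP check at issue is a one-line computation. A minimal sketch with made-up catch-at-age figures (not from any real assessment) shows the SOP, the common correction factor, and how scaling the catch numbers forces the SOP to match the landings:

```python
# Sum Of Products (SOP) check with hypothetical catch-at-age data:
# SOP = sum over ages of catch-in-numbers x mean weight-at-age,
# which ideally equals the known total landings.

catch_numbers = [1200.0, 800.0, 350.0, 90.0]   # thousands of fish per age (hypothetical)
mean_weight = [0.25, 0.55, 0.90, 1.40]          # kg at age (hypothetical)
landings = 1150.0                                # known total landings, tonnes (hypothetical)

sop = sum(n * w for n, w in zip(catch_numbers, mean_weight))  # tonnes
factor = landings / sop   # common correction factor some working groups apply

# The SOP "correction" scales either catch numbers or weights by this factor:
corrected_numbers = [n * factor for n in catch_numbers]
```
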
Estimation of radioactivity in some sand and soil samples
International Nuclear Information System (INIS)
Gupta, Monika; Chauhan, R.P.; Garg, Ajay; Kumar, Sushil; Sonkawade, R.G.
2010-01-01
Natural radioactivity is composed of cosmogenic and primordial radionuclides. It is common in the rocks and soil that make up our planet, in water and oceans, and in our building materials and homes. Natural radioactivity in sand and soils comes from the 238U and 232Th series and from natural 40K. Radon is formed from the decay of radium, which in turn is formed from uranium. The gaseous radioactive isotope radon from natural sources has a significant share in the total natural-source exposure of human beings. Gamma radiation from 238U, 232Th and 40K represents the main external source of irradiation of the human body. In the present study, the activity of 238U, 232Th and 40K is found to vary from 45 ± 1.2 to 97 ± 4.9 Bq/kg, 63 ± 2.0 to 132 ± 3.2 Bq/kg and 492 ± 5.9 to 1110 ± 10.5 Bq/kg, respectively, in the soil samples, while variations from 63 ± 3.8 to 65 ± 3.7 Bq/kg, 86 ± 2.5 to 96 ± 2.6 Bq/kg and 751 ± 7.7 to 824 ± 8.2 Bq/kg, respectively, have been observed in the sand samples. (author)
Method validation to determine total alpha beta emitters in water samples using LSC
International Nuclear Information System (INIS)
Al-Masri, M. S.; Nashawati, A.; Al-akel, B.; Saaid, S.
2006-06-01
In this work a method was validated to determine gross alpha and beta emitters in water samples using a liquid scintillation counter. 200 ml of water from each sample were evaporated to 20 ml, and 8 ml of the concentrate were mixed with 12 ml of a suitable cocktail and measured on a Wallac Winspectral 1414 liquid scintillation counter. The lower detection limit (LDL) of this method was 0.33 DPM for total alpha emitters and 1.3 DPM for total beta emitters; the reproducibility limit was ±2.32 DPM and ±1.41 DPM for total alpha and beta emitters, respectively, and the repeatability limit was ±2.19 DPM and ±1.11 DPM, respectively. The method is easy and fast because of the simple preparation steps and the large number of samples that can be measured at the same time. In addition, many real samples and standard samples were analyzed by the method and showed accurate results, so it was concluded that the method can be used with various water samples. (author)
Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS.
Mataveli, Lidiane Raquel Verola; Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo
2016-01-01
This study is aimed at investigating a suitable method for rice sample preparation as well as validating and applying the method for monitoring the concentration of total arsenic, cadmium, and lead in rice by using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters including limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding the sample preparation, recoveries of spiked samples were within the acceptable range from 89.3 to 98.2% for muffle furnace, 94.2 to 103.3% for heating block, 81.0 to 115.0% for hot plate, and 92.8 to 108.2% for microwave. Validation parameters showed that the method fits for its purpose, being the total arsenic, cadmium, and lead within the Brazilian Legislation limits. The method was applied for analyzing 37 rice samples (including polished, brown, and parboiled), consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicated the need to establish monitoring programs for emphasizing the study on this type of cereal, aiming at promoting the Public Health.
Directory of Open Access Journals (Sweden)
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
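The stratified estimator behind this comparison expands each stratum's sample mean by the stratum size. A minimal sketch with synthetic block values, in which a MODIS-derived "hotspot" stratum is separated from the background (data and stratum sizes are illustrative):

```python
import random

random.seed(42)

# Stratified estimator sketch: blocks are grouped into a MODIS-derived
# "hotspot" stratum and a background stratum; the estimated total is the
# sum over strata of (stratum size) x (sample mean). Values are synthetic.

def stratified_total(strata, n_per_stratum):
    """Estimate a population total from simple random samples within strata."""
    total = 0.0
    for blocks in strata:
        sample = random.sample(blocks, n_per_stratum)
        total += len(blocks) * (sum(sample) / n_per_stratum)
    return total

hotspots = [random.uniform(50.0, 150.0) for _ in range(100)]  # deforested km2 per block
background = [random.uniform(0.0, 5.0) for _ in range(900)]

est = stratified_total([hotspots, background], n_per_stratum=30)
true_total = sum(hotspots) + sum(background)
```
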
Directory of Open Access Journals (Sweden)
Jiaming Liu
2015-11-01
Measuring total nitrogen (TN) and total phosphorus (TP) is important in managing heavily polluted urban waters in China. This study uses high spatial resolution IKONOS imagery with four multispectral bands, which roughly correspond to Landsat/TM bands 1–4, to determine TN and TP in small urban rivers and lakes in China. Using Lake Cihu and the lower reaches of the Wen-Rui Tang (WRT) River as examples, this paper develops both multiple linear regression (MLR) and artificial neural network (ANN) models to estimate TN and TP concentrations from high spatial resolution remote sensing imagery and in situ water samples collected concurrently with the overpassing satellite. The measured and estimated values of both the MLR and ANN models are in good agreement (R2 > 0.85 and RMSE < 2.50). The empirical equations selected by MLR are more straightforward, whereas the estimation accuracy of the ANN model is better (R2 > 0.86 and RMSE < 0.89). Results validate the potential of using high resolution IKONOS multispectral imagery to study the chemical state of small urban water bodies. The spatial distribution maps of TN and TP concentrations generated by the ANN model can inform decision makers of variations in water quality in Lake Cihu and the lower reaches of the WRT River. The approaches and equations developed in this study could be applied to other urban water bodies for water quality monitoring.
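The MLR step described above amounts to an ordinary least-squares fit of in situ concentrations on the four band reflectances. The data below are synthetic and the coefficients illustrative, not those of the paper:

```python
import numpy as np

# MLR sketch: regress in situ TN concentration on four band reflectances.
# Data are synthetic; coefficients are illustrative assumptions.

rng = np.random.default_rng(0)
n = 40
bands = rng.uniform(0.02, 0.30, size=(n, 4))          # B1..B4 reflectance
true_coef = np.array([12.0, -8.0, 25.0, 3.0])
tn = bands @ true_coef + 1.5 + rng.normal(0.0, 0.1, n)  # mg/L, with noise

X = np.column_stack([np.ones(n), bands])              # add intercept column
coef, *_ = np.linalg.lstsq(X, tn, rcond=None)
pred = X @ coef

r2 = 1 - np.sum((tn - pred) ** 2) / np.sum((tn - tn.mean()) ** 2)
rmse = np.sqrt(np.mean((tn - pred) ** 2))
```
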
Estimating the total leaf area of the green dwarf coconut tree (Cocos nucifera L.)
Directory of Open Access Journals (Sweden)
Sousa Elias Fernandes de
2005-01-01
Leaf area has a significant effect on tree transpiration, and its measurement is important to many study areas. This work aimed at developing a non-destructive, practical, empirical method to estimate the total leaf area of green dwarf coconut palms (Cocos nucifera L.) in plantations located in the northern region of Rio de Janeiro state, Brazil. A mathematical model was developed to estimate total leaf area (TLA) as a function of the average length of the last three leaf rachises (LR3) and of the number of leaves in the canopy (NL). The model has a satisfactory degree of accuracy for agricultural engineering purposes.
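The paper's model relates TLA to LR3 and NL; since its functional form and coefficients are not reproduced here, the sketch below fits a hypothetical single-coefficient model TLA = b·NL·LR3 to synthetic calibration data, only to illustrate the calibration idea:

```python
import numpy as np

# Hypothetical leaf-area model: TLA = b * NL * LR3, fitted to synthetic
# calibration data. The paper's actual functional form and coefficients
# are not reproduced here; this only illustrates the approach.

rng = np.random.default_rng(1)
nl = rng.integers(20, 35, size=25).astype(float)   # leaves per canopy
lr3 = rng.uniform(3.0, 5.0, size=25)               # mean rachis length (m)
tla = 0.8 * nl * lr3 + rng.normal(0.0, 2.0, 25)    # "measured" leaf area (m2)

x = nl * lr3
b = float(np.sum(x * tla) / np.sum(x * x))         # least squares through the origin

def predict_tla(n_leaves, rachis_len):
    """Estimate total leaf area from leaf count and mean rachis length."""
    return b * n_leaves * rachis_len
```
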
Brus, D.J.; Gruijter, de J.J.
2003-01-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be
Olives, Casey; Valadez, Joseph J; Pagano, Marcello
2014-03-01
To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but also comes with a reduced sample size. © 2014 John Wiley & Sons Ltd.
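The bias from ignoring curtailment is easy to reproduce by simulation. The sketch below implements a semicurtailed rule for one of the designs mentioned (n = 60, decision rule 33) under an assumed coverage of 0.7 and shows that the naive estimate successes/observed is biased upward; the stopping-rule details are a simplified reading of the design, not the authors' exact procedure:

```python
import random

random.seed(7)

# Semicurtailed LQAS simulation (simplified): with n = 60 and decision rule
# d = 33, sampling stops as soon as 33 successes are seen, or as soon as so
# many failures have occurred that 33 successes are impossible. The naive
# estimate successes/observed ignores this stopping rule and is biased.

def semicurtailed_run(p, n=60, d=33):
    successes = failures = 0
    while successes < d and failures <= n - d:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return successes, successes + failures

p_true = 0.7
naive = []
for _ in range(20000):
    s, observed = semicurtailed_run(p_true)
    naive.append(s / observed)

bias = sum(naive) / len(naive) - p_true   # positive: the naive estimate is too high
```
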
Direct analysis of biological samples by total reflection X-ray fluorescence
International Nuclear Information System (INIS)
Lue M, Marco P.; Hernandez-Caraballo, Edwin A.
2004-01-01
The technique of total reflection X-ray fluorescence (TXRF) is well suited for the direct analysis of biological samples owing to its low matrix interferences and simultaneous multi-element capability. Nevertheless, biological organic samples are frequently analysed after digestion procedures. Direct determination of the analytes requires a shorter analysis time, consumes fewer reagents and simplifies the whole analysis process. On the other hand, biological/clinical samples are often available only in minimal amounts, and routine studies require the analysis of a large number of samples. To overcome the difficulties associated with the analysis of organic samples, particularly solid ones, different procedures of sample preparation and calibration to approach direct analysis have been evaluated: (1) slurry sampling, (2) Compton peak standardization, (3) in situ microwave digestion, (4) in situ chemical modification and (5) direct analysis with internal standardization. Examples of analytical methods developed by our research group are discussed. Some of them have not been previously published, illustrating alternative strategies for coping with various problems that may be encountered in direct analysis by total reflection X-ray fluorescence spectrometry.
First Total Reflection X-Ray Fluorescence round-robin test of water samples: Preliminary results
Energy Technology Data Exchange (ETDEWEB)
Borgese, Laura; Bilo, Fabjola [Chemistry for Technologies Laboratory, University of Brescia, Brescia (Italy); Tsuji, Kouichi [Graduate School of Engineering, Osaka City University, Osaka (Japan); Fernández-Ruiz, Ramón [Servicio Interdepartamental de Investigación (SIdI), Laboratorio de TXRF, Universidad Autónoma de Madrid, Madrid (Spain); Margui, Eva [Department of Chemistry, University of Girona, Girona (Spain); Streli, Christina [TU Wien, Atominstitut,Radiation Physics, Vienna (Austria); Pepponi, Giancarlo [Fondazione Bruno Kessler, Povo, Trento (Italy); Stosnach, Hagen [Bruker Nano GmbH, Berlin (Germany); Yamada, Takashi [Rigaku Corporation, Takatsuki, Osaka (Japan); Vandenabeele, Peter [Department of Archaeology, Ghent University, Ghent (Belgium); Maina, David M.; Gatari, Michael [Institute of Nuclear Science and Technology, University of Nairobi, Nairobi (Kenya); Shepherd, Keith D.; Towett, Erick K. [World Agroforestry Centre (ICRAF), Nairobi (Kenya); Bennun, Leonardo [Laboratorio de Física Aplicada, Departamento de Física, Universidad de Concepción (Chile); Custo, Graciela; Vasquez, Cristina [Gerencia Química, Laboratorio B025, Centro Atómico Constituyentes, San Martín (Argentina); Depero, Laura E., E-mail: laura.depero@unibs.it [Chemistry for Technologies Laboratory, University of Brescia, Brescia (Italy)
2014-11-01
Total Reflection X-Ray Fluorescence (TXRF) is a mature technique for quantitatively evaluating the elemental composition of liquid samples deposited on clean and well-polished reflectors. In this paper the results of the first worldwide TXRF round-robin test of water samples, involving 18 laboratories in 10 countries, are presented and discussed. The test was performed within the framework of the VAMAS project, interlaboratory comparison of TXRF spectroscopy for environmental analysis, whose aim is to develop guidelines and a standard methodology for biological and environmental analysis by means of the TXRF analytical technique. - Highlights: • The discussion of the first worldwide TXRF round-robin test of water samples (18 laboratories of 10 countries) is reported. • Drinking, waste, and desalinated water samples were tested. • Data dispersion sources were identified: sample concentration, preparation, fitting procedure, and quantification. • The protocol for TXRF analysis of drinking water is proposed.
Total CMB analysis of streaker aerosol samples by PIXE, PIGE, beta- and optical-absorption analyses
International Nuclear Information System (INIS)
Annegarn, H.J.; Przybylowicz, W.J.
1993-01-01
Multielemental analyses of aerosol samples are widely used in air pollution receptor modelling. Specifically, the chemical mass balance (CMB) model has become a powerful tool in urban air quality studies. Input data required for the CMB include not only the traditional X-ray fluorescence (and hence PIXE) detected elements, but also total mass, organic and inorganic carbon, and other light elements including Mg, Na and F. The circular streaker sampler, in combination with PIXE analysis, has developed into a powerful tool for obtaining time-resolved, multielemental aerosol data. However, its application in CMB modelling has been limited by the absence of total mass and complementary light element data. This study reports on progress in using techniques complementary to PIXE to obtain additional data from circular streaker samples, maintaining the nondestructive, instrumental approach inherent in PIXE: beta-gauging using a 147Pm source for total mass, optical absorption for inorganic carbon, and PIGE to measure the lighter elements. (orig.)
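Once total mass and the light elements are available, the CMB receptor model reduces to solving a linear system: measured species concentrations are modelled as source profiles times source contributions. A minimal least-squares sketch with synthetic profiles (real CMB implementations weight the fit by measurement uncertainty):

```python
import numpy as np

# Chemical mass balance (CMB) sketch: ambient species concentrations c are
# modelled as F @ s, where the columns of F are source profiles (mass
# fractions) and s holds the source contributions. Values are synthetic.

F = np.array([
    [0.30, 0.01],   # e.g. Si: crustal vs traffic (illustrative fractions)
    [0.05, 0.20],   # Fe
    [0.01, 0.40],   # inorganic carbon
    [0.10, 0.02],   # Mg
])
s_true = np.array([12.0, 5.0])   # ug/m3 source contributions
c = F @ s_true                    # "measured" ambient concentrations (noise-free)

s_hat, *_ = np.linalg.lstsq(F, c, rcond=None)   # recover the contributions
```
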
Application of the control variate technique to estimation of total sensitivity indices
International Nuclear Information System (INIS)
Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.
2015-01-01
Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method known as Sobol' sensitivity indices has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices we apply the control variate reduction technique and develop a new formula for the evaluation of total sensitivity indices. Results for well-known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula.
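The control variate idea can be illustrated on a toy Monte Carlo mean estimate: subtract a correlated quantity with known expectation and add its mean back, which keeps the estimate unbiased while shrinking its variance. This is the generic technique, not the paper's new formula for total indices:

```python
import numpy as np

# Control variate illustration on a toy integral: estimate E[e^X], X~U(0,1),
# using g(X) = X (known mean 1/2) as the control. Unbiasedness is preserved
# because the subtracted control has its known mean added back.

rng = np.random.default_rng(3)
n = 100_000
x = rng.uniform(0.0, 1.0, n)

f = np.exp(x)                                  # target; true mean is e - 1
g = x                                          # control; E[g] = 0.5 known

beta = np.cov(f, g, ddof=0)[0, 1] / np.var(g)  # estimated optimal coefficient
plain = f.mean()
controlled = (f - beta * (g - 0.5)).mean()

var_plain = f.var() / n                        # variance of the plain estimator
var_controlled = (f - beta * g).var() / n      # much smaller: f and g are highly correlated
```
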
Estimating total evaporation at the field scale using the SEBS model ...
African Journals Online (AJOL)
Estimating total evaporation at the field scale using the SEBS model and data infilling ... of two infilling techniques to create a daily satellite-derived ET time series. ... and produced R2 and RMSE values of 0.33 and 2.19 mm∙d-1, respectively, ...
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
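The multiplier estimate and its precision can be sketched with the delta method, inflating the variance of P by an assumed design effect for the RDS survey. The numbers below are illustrative, not the Harare data:

```python
import math

# Multiplier-method sketch: population size N = M / P, with a delta-method
# confidence interval in which Var(P) is inflated by an assumed RDS design
# effect. All numbers are illustrative.

def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
    """Return the size estimate N = M / p_hat and a delta-method CI."""
    var_p = design_effect * p_hat * (1 - p_hat) / n
    N = M / p_hat
    # delta method: Var(M / P) ~ (M / p^2)^2 * Var(P)
    se_N = (M / p_hat ** 2) * math.sqrt(var_p)
    return N, (N - z * se_N, N + z * se_N)

N, ci = multiplier_estimate(M=5000, p_hat=0.25, n=400)
```

Note how the interval widens as p_hat shrinks, matching the abstract's observation that random error is largest when P is low.
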
Total reflection x-ray analysis of metals in blood samples
International Nuclear Information System (INIS)
Nakamura, Takuya; Matsui, Hiroshi; Kawamata, Masaya
2009-01-01
The sample preparation for TXRF (total reflection X-ray fluorescence) quantitative analysis of trace elements in human blood samples was investigated. In TXRF analysis, a solution sample is dropped and dried on a flat substrate, and the dried residue is then measured. The dried residue should be flat so as not to disturb the total reflection of X-rays on the substrate. In addition, a simple way to measure the whole blood sample by the TXRF method is required, although serum is analyzed in many cases. Thus, we studied the optimum sample-preparation conditions for whole blood, adding pure water to exploit the hemolysis phenomenon, in which blood cells are destroyed by the difference in osmotic pressure, leading to a flat residue. It was found that the best S/B ratio was obtained when the whole blood was diluted 8 times with pure water. Moreover, the influence of the surface chemical condition of the glass substrate on the shape of the dried residue of the blood sample was investigated. When the surface of the glass substrate was hydrophilic, the shape of the dried residues was not uniform; as a result, the quantitative data of TXRF analysis showed a large deviation. On the other hand, when the surface of the glass was hydrophobic, the shape of the residue was almost uniform; as a result, good reproducibility was obtained. Another problem was an outer ring in the dried residue of the blood. This uneven ring absorbs the primary X-rays, leading to low quantitative results. Thus, we tried heating the dropped blood sample at a high temperature of 200 °C. In this case, the blood sample dried immediately, and a flat, homogeneous dried residue was obtained without the outer ring. Using the optimized sample preparation conditions, a human blood sample was quantitatively measured by TXRF and ICP-AES. Good agreement was obtained between the TXRF and ICP-AES determinations; moreover, the measurement of Cl and Br will be an advantage of TXRF, because
Estimating mean change in population salt intake using spot urine samples.
Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce
2017-10-01
Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21; P < 0.001). The difference estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; P = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation and -0.27 g/day (-0.39 to -0.15; P 0.058). Separate analysis of the unpaired and paired data showed that detection of
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch-curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not assume that Z is constant over the whole period; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different windows are estimated from the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true rates (the relative differences between the change rates of the estimated and true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimate of Z and its trend. The most appropriate value of n can differ with the effects of different factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either contains error, although the estimated change rates of Z remain close to the true rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
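The traditional catch curve that this method generalizes estimates Z as minus the slope of ln(CPUE) against age. A minimal sketch on synthetic, error-free data, which recovers Z exactly:

```python
import math

# Classic catch-curve estimate: under constant Z, ln(CPUE) declines linearly
# with age, so Z is minus the least-squares slope of ln(CPUE) on age.

def catch_curve_z(ages, cpue):
    """Estimate total mortality Z from CPUE-at-age by log-linear regression."""
    y = [math.log(c) for c in cpue]
    n = len(ages)
    mean_a = sum(ages) / n
    mean_y = sum(y) / n
    sxx = sum((a - mean_a) ** 2 for a in ages)
    sxy = sum((a - mean_a) * (v - mean_y) for a, v in zip(ages, y))
    return -sxy / sxx

Z_true = 0.6
ages = [2, 3, 4, 5, 6, 7]
cpue = [100.0 * math.exp(-Z_true * a) for a in ages]   # synthetic, error-free
Z_hat = catch_curve_z(ages, cpue)
```
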
International Nuclear Information System (INIS)
DOUGLAS JG; MEZNARICH HD, PHD; OLSEN JR; ROSS GA; STAUFFER M
2008-01-01
Total organic halogen (TOX) is used as a parameter to screen groundwater samples at the Hanford Site. Trending is done for each groundwater well, and changes in TOX and other screening parameters can lead to costly changes in the monitoring protocol. The Waste Sampling and Characterization Facility (WSCF) analyzes groundwater samples for TOX using the United States Environmental Protection Agency (EPA) SW-846 method 9020B (EPA 1996a). Samples from the Soil and Groundwater Remediation Project (S&GRP) are submitted to the WSCF for analysis without information regarding the source of the sample; each sample is in essence a 'blind' sample to the laboratory. Feedback from the S&GRP indicated that some of the WSCF-generated TOX data from groundwater wells had a number of outlier values based on the historical trends (Anastos 2008a). Additionally, analysts at WSCF observed inconsistent TOX results among field sample replicates. Therefore, the WSCF lab performed an investigation of the TOX analysis to determine the cause of the outlier data points. Two causes were found that contributed to generating out-of-trend TOX data: (1) The presence of inorganic chloride in the groundwater samples: at inorganic chloride concentrations greater than about 10 parts per million (ppm), apparent TOX values increase with increasing chloride concentration. A parallel observation is the increase in apparent breakthrough of TOX from the first to the second activated-carbon adsorption tube with increasing inorganic chloride concentration. (2) During the sample preparation step, excessive purging of the adsorption tubes with oxygen pressurization gas after sample loading may cause channeling in the activated-carbon bed. This channeling leads to poor removal of inorganic chloride during the subsequent wash step with aqueous potassium nitrate. The presence of this residual inorganic chloride then produces erroneously high TOX values. Changes in sample preparation were studied to more
International Nuclear Information System (INIS)
Dhara, Sangita; Misra, N.L.; Maind, S.D.; Kumar, Sanjukta A.; Chattopadhyay, N.; Aggarwal, S.K.
2010-01-01
The possibility of applying Total Reflection X-ray Fluorescence for qualitative and quantitative differentiation of documents printed with rare earth tagged and untagged inks has been explored in this paper. For qualitative differentiation, a very small amount of ink was loosened from the printed documents by smoothly rubbing with a new clean blade without destroying the manuscript. 50 μL of Milli-Q water was put on this loose powder, on the manuscript, and was agitated by sucking and releasing the suspension two to three times with the help of a micropipette. The resultant dispersion was deposited on a quartz sample support for Total Reflection X-ray Fluorescence measurements. The Total Reflection X-ray Fluorescence spectra of tagged and untagged inks could be clearly differentiated. In order to see the applicability of Total Reflection X-ray Fluorescence for quantitative determinations of rare earths and also to countercheck such determinations in ink samples, the amounts of rare earth in papers painted with single rare earth tagged inks were determined by digesting the painted paper in HNO3/HClO4, mixing this solution with the internal standard and recording their Total Reflection X-ray Fluorescence spectra after calibration of the instrument. The results thus obtained were compared with those obtained by Inductively Coupled Plasma Mass Spectrometry and were found in good agreement. The average precision of the Total Reflection X-ray Fluorescence determinations was 5.5% (1σ) and the average deviation of Total Reflection X-ray Fluorescence determined values from those of Inductively Coupled Plasma Mass Spectrometry was 7.3%. These studies have shown that Total Reflection X-ray Fluorescence offers a promising and potential application in forensic work of this nature.
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
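The abstract does not specify the model's full conditionals; as a generic illustration of the Gibbs mechanism it proposes, here is a minimal two-block sampler for a normal model with conjugate priors (all prior settings and the model itself are illustrative assumptions, not taken from the paper):

```python
import math
import random

def gibbs_normal(data, iters=5000, burn=1000, seed=1):
    """Two-block Gibbs sampler for (mu, tau) in a Normal(mu, 1/tau) model.
    Priors (assumed here): mu ~ N(m0, v0), tau ~ Gamma(a0, rate=b0)."""
    rng = random.Random(seed)
    n, xbar = len(data), sum(data) / len(data)
    a0, b0, m0, v0 = 0.01, 0.01, 0.0, 100.0
    mu, tau = xbar, 1.0
    draws = []
    for t in range(iters):
        # 1) Draw mu | tau, data from its conjugate normal full conditional
        v = 1.0 / (1.0 / v0 + n * tau)
        m = v * (m0 / v0 + tau * n * xbar)
        mu = rng.gauss(m, math.sqrt(v))
        # 2) Draw tau | mu, data from its conjugate gamma full conditional
        ss = sum((x - mu) ** 2 for x in data)
        tau = rng.gammavariate(a0 + n / 2.0, 1.0 / (b0 + ss / 2.0))
        if t >= burn:
            draws.append((mu, tau))
    return draws
```

In the small-area setting the same alternation runs over area-level parameters, with each full conditional derived from the inverse Gaussian likelihood instead of the normal one used here.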
International Nuclear Information System (INIS)
Eltayeb, M. A. H.; Mohammed, A. A.
2003-01-01
In the present work, uranium content and total phosphorus were determined in 30 phosphate ore samples collected from the Kurun and Uro areas in the Nuba Mountains of Sudan. Spectrophotometry was used for this purpose. The uranium analysis is based on leaching the rock with nitric acid and treatment with ammonium carbonate solution, whereby uranium(VI) is kept in solution as its carbonate complex. The ion exchange technique was used for the recovery of uranium, which was eluted from the resin with 1 M hydrochloric acid. In the eluate, uranium was determined spectrophotometrically by measuring the absorbance of the yellow uranium(VI)-8-hydroxyquinolate complex at λ = 400 nm. Total phosphorus was measured as P2O5 (%) by treating the leach liquor with ammonium molybdate solution; the absorbance of the blue complex was measured at λ = 880 nm. The results show a limited relation between uranium content and total phosphorus in phosphate samples from the Kurun area, which contain 58.8 ppm uranium on average, whereas no relation exists in samples from the Uro area, which contain 200 ppm uranium on average. (Author)
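Both determinations rest on a linear absorbance-concentration (Beer-Lambert) calibration at a fixed wavelength; a minimal sketch of the back-calculation, with purely illustrative calibration constants (the paper does not report its calibration parameters):

```python
def beer_lambert_concentration(absorbance, slope, intercept=0.0):
    """Linear calibration A = slope*C + intercept, inverted to give
    C = (A - intercept) / slope. Slope and intercept come from a
    calibration curve of standards; the values used below are made up."""
    return (absorbance - intercept) / slope

# Hypothetical example: absorbance 0.50 with a slope of 0.05 L/mg
print(beer_lambert_concentration(0.50, 0.05))
```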
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.
2016-06-01
Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low-abundance isotope ratios. This work examines modified total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small-sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.
Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.
2009-12-01
The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands and Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and the Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
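The Chao1 estimator used above has a simple closed form based on singleton and doubleton counts; a minimal sketch of the bias-corrected variant, with toy abundance data rather than the study's:

```python
def chao1(abundances):
    """Bias-corrected Chao1 richness estimate from per-OTU abundance counts:
    S_chao1 = S_obs + f1*(f1 - 1) / (2*(f2 + 1)),
    where f1 is the number of singletons and f2 the number of doubletons."""
    s_obs = sum(1 for a in abundances if a > 0)   # observed richness
    f1 = sum(1 for a in abundances if a == 1)     # singletons
    f2 = sum(1 for a in abundances if a == 2)     # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Toy community: many rare OTUs push the estimate above the observed count
counts = [1, 1, 1, 1, 2, 2, 3, 5, 10]
print(chao1(counts))  # 9 observed OTUs, estimate 11.0
```

The dominance of singletons in the study (44% of OTUs) is exactly the regime in which Chao1 adds a large unseen-species correction, which is why the authors still treat the result as an underestimate.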
Stereological estimation of total cell numbers in the young human utricular macula
DEFF Research Database (Denmark)
Severinsen, Stig Avall; Sørensen, Mads Sølvsten; Kirkegaard, Mette
2010-01-01
Conclusion: There is no change in the total cell population and hair cell:supporting cell ratio in the human utricular macula from gestational week 16 and onwards, whereas the lower hair cell:supporting cell ratio and lower total number of cells in the youngest specimens indicate that the utricle is still differentiating and adding new cells at the 10th to 12th gestational week. Objectives: Archival temporal bones were investigated to quantify cell numbers in the utricular macula in fetuses and children. Methods: The age of the subjects ranged from gestational week 10 to 15 years. The optical fractionator was used to estimate the total number of cells in the utricular macula. Results: The total cell number was found to be 143 000 in subjects older than gestational week 16. The number of hair cells and supporting cells did not change between the 16th gestational week and 15 years...
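The optical fractionator scales the raw particle count by the inverse of each sampling fraction; a sketch of that arithmetic (the fractions below are illustrative, not those of the study):

```python
def fractionator_total(q_counted, ssf, asf, tsf):
    """Optical fractionator estimate: multiply the number of particles
    counted in disectors (q_counted) by the reciprocals of the section
    sampling fraction (ssf), area sampling fraction (asf), and thickness
    sampling fraction (tsf)."""
    return q_counted * (1.0 / ssf) * (1.0 / asf) * (1.0 / tsf)

# Hypothetical example: 143 cells counted with 1/10 of sections,
# 1/25 of section area, and 1/4 of section thickness sampled
print(round(fractionator_total(143, 0.1, 0.04, 0.25)))
```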
Estimation of the total absorbed dose by quartz in retrospective conditions
International Nuclear Information System (INIS)
Correcher, V.; Delgado, A.
2003-01-01
The estimation of the total absorbed dose is of great interest in areas affected by a radiological accident when no conventional dosimetric systems are available. This paper reports on the usual methodology employed in dose reconstruction from the thermoluminescence (TL) properties of natural quartz extracted from selected ceramic materials (12 bricks) collected in the Chernobyl area. It has been possible to evaluate doses below 50 mGy more than 11 years after the radiological accident. The main advance is a reduction of the commonly accepted dose-estimation limit by more than a factor of 20 using luminescence methods. (Author) 11 refs
International Nuclear Information System (INIS)
Gartrell, M.J.; Craun, J.C.; Podrebarac, D.S.; Gunderson, E.L.
1985-01-01
The US Food and Drug Administration (FDA) conducts Total Diet Studies to determine the dietary intake of selected pesticides, industrial chemicals, and elements (including radionuclides). These studies involve the retail purchase and analysis of foods representative of the diets of infants, toddlers, and adults. The individual food items are separated into a number of food groups, each of which is analyzed as a composite. This report summarizes the results for adult Total Diet samples collected in 20 cities between October 1979 and September 1980. The average concentration, range of concentrations, and calculated average daily intake of each chemical found are presented by food group. The average daily intakes of the chemicals are similar to those found in the several preceding years and are within acceptable limits. The results for samples collected during the same period that represent the diets of infants and toddlers are reported separately
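The average daily intake reported per food group is, in essence, a concentration-times-consumption sum across the composites; a sketch with entirely hypothetical numbers:

```python
def daily_intake(concentrations, consumptions):
    """Average daily intake of a chemical: sum over food groups of
    (concentration in the group composite, e.g. mg/kg) times
    (average daily consumption of that group, e.g. kg/day)."""
    return sum(c * q for c, q in zip(concentrations, consumptions))

# Hypothetical two-group diet: 2.0 mg/kg in 0.1 kg/day of group A,
# 0.5 mg/kg in 0.4 kg/day of group B
print(daily_intake([2.0, 0.5], [0.1, 0.4]))  # mg/day
```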
Mandjukov, Petko; Orani, Anna Maria; Han, Eunmi; Vassileva, Emilia
2015-01-01
The most critical step in almost all commonly used analytical procedures for Hg determination is the sample preparation, due to its extreme volatility. One possible solution to this problem is the application of methods for direct analysis of solid samples. The possibilities for solid sampling high resolution continuum source atomic absorption spectrometry (HR CS AAS) determination of total mercury in various marine environmental samples, e.g. sediments and biota, are the object of the present study. The instrumental parameters were optimized in order to obtain a reproducible and interference-free analytical signal. A calibration technique based on the use of solid certified reference materials similar in nature to the analyzed sample was developed and applied to various CRMs and real samples. This technique allows simple and reliable evaluation of the uncertainty of the result and the metrological characteristics of the method. A validation approach in line with the requirements of the ISO 17025 standard and Eurachem guidelines was followed. With this in mind, selectivity, working range (0.06 to 25 ng for biota and 0.025 to 4 ng for sediment samples, expressed as total Hg), linearity (confirmed by Student's t-test), bias (1.6-4.3%), repeatability (4-9%), reproducibility (9-11%), and absolute limit of detection (0.025 ng for sediment, 0.096 ng for marine biota) were systematically assessed using solid CRMs. The relative expanded uncertainty was estimated at 15% for the sediment sample and 8.5% for the marine biota sample (k = 2). Demonstration of the traceability of measurement results is also presented. The potential of the proposed analytical procedure, based on the solid sampling HR CS AAS technique, was demonstrated by direct analysis of sea sediments from the Caribbean region and various CRMs. Overall, the use of solid sampling HR CS AAS permits obtaining significant advantages for the determination of this complex analyte in marine samples, such as straightforward
Quality control on the accuracy of the total beta activity index in different water sample matrices
International Nuclear Information System (INIS)
Pujol, L.; Pablo, M. A. de; Payeras, J.
2013-01-01
The standard ISO/IEC 17025:2005, on general requirements for the competence of testing and calibration laboratories, provides that a laboratory shall have quality control procedures for monitoring the validity of tests and calibrations. In this paper, the experience of the Isotopic Applications Laboratory (CEDEX) in controlling the accuracy of the total beta activity index in samples of drinking water, inland waters and marine waters is presented. (Author)
Measurement of total risk of spontaneous abortion: the virtue of conditional risk estimation
DEFF Research Database (Denmark)
Modvig, J; Schmidt, L; Damsgaard, M T
1990-01-01
The concepts, methods, and problems of measuring spontaneous abortion risk are reviewed. The problems touched on include the process of pregnancy verification, the changes in risk by gestational age and maternal age, and the presence of induced abortions. Methods used in studies of spontaneous abortion risk include biochemical assays as well as life table technique, although the latter appears in two different forms. The consequences of using either of these are discussed. It is concluded that no study design so far is appropriate for measuring the total risk of spontaneous abortion from early conception to the end of the 27th week. It is proposed that pregnancy may be considered to consist of two or three specific periods and that different study designs should concentrate on measuring the conditional risk within each period. A careful estimate using this principle leads to an estimate of total...
Design unbiased estimation in line intersect sampling using segmented transects
David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine; Harry T. Valentine
2005-01-01
In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated, and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...
Turbidity threshold sampling for suspended sediment load estimation
Jack Lewis; Rand Eads
2001-01-01
Abstract - The paper discusses an automated procedure for measuring turbidity and sampling suspended sediment. The basic equipment consists of a programmable data logger, an in situ turbidimeter, a pumping sampler, and a stage-measuring device. The data logger program employs turbidity to govern sample collection during each transport event. Mounting configurations and...
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due 1) to the evolution of the official algorithms used to process the data, and 2) to differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
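A first-order sketch of such a sampling-error estimate treats the month's N snapshots as a (possibly autocorrelated) sample of the rain field; this illustrates the general idea only, not the authors' model:

```python
import math

def sampling_error(obs, rho=0.0):
    """Rough standard error of a monthly mean built from len(obs) satellite
    snapshots. An optional lag-1 autocorrelation rho (an assumption of this
    sketch) shrinks the effective sample size, inflating the error."""
    n = len(obs)
    mean = sum(obs) / n
    var = sum((x - mean) ** 2 for x in obs) / (n - 1)  # sample variance
    n_eff = n * (1 - rho) / (1 + rho) if rho else n    # effective samples
    return math.sqrt(var / n_eff)

# Hypothetical snapshot rain rates (mm/h); correlated visits widen the error
print(sampling_error([1.0, 2.0, 3.0]))
print(sampling_error([1.0, 2.0, 3.0], rho=0.5))
```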
Analysis of biological slurry samples by total x-ray fluorescence after in situ microwave digestion
International Nuclear Information System (INIS)
Lue-Meru, M.P.; Capote, T.; Greaves, E.
2000-01-01
Biological slurry samples were analyzed by total reflection x-ray fluorescence (TXRF) after an in situ microwave digestion procedure on the quartz reflector. This method removes the matrix by digestion and permits enrichment of the analytes, since sample amounts larger than those normally allowed by the TXRF thin-layer requirement can be used once the organic matrix is removed. In consequence, pre-concentration of the sample is performed and the detection capability is increased by a quasi-direct method. The samples analyzed were the international IAEA blood standard, the SRM 1577a bovine liver standard and fresh onion tissues. Slurries were prepared in three ways: (a) weighing a sample amount on the reflector and adding suprapure nitric acid and internal standard, followed by microwave digestion; (b) weighing a sample amount and water with an appropriate concentration of the internal standard in an Eppendorf vial, then taking an aliquot to the quartz reflector for microwave digestion with suprapure nitric acid; (c) weighing a sample amount of fresh tissue and homogenizing it with a high-speed homogenizer to obtain a slurry, which can be diluted in an Eppendorf vial with water and the internal standard; an aliquot is then taken to the reflector for microwave digestion with suprapure nitric acid. Further details of the sample preparation procedures will be discussed during the presentation. The analysis was carried out in a Canberra spectrometer using the Kα lines of the Ag and Mo tubes. The elements Ca, K, Fe, Cu, Zn, Se, Mn, Rb, Br and Sr were determined. The effect of the preparation procedure was evaluated by the accuracy and precision of the results for each element and the percent recovery. (author)
Determination of total chromium in tanned leather samples used in car industry.
Zeiner, Michaela; Rezić, Iva; Ujević, Darko; Steffan, Ilse
2011-03-01
Despite strong competition from synthetic fibers, leather is still widely used for many applications. In order to ensure sufficient stability of the skin matrix against factors such as microbial degradation, heat and sweat, a tanning process is indispensable. Using chromium(III) for this purpose offers many advantages, so this form of tanning is widely applied. During the use of chromium-tanned leather as clothing material and for decoration or covering purposes, chromium is extracted from the leather and may then cause nocuous effects on human skin, e.g. allergic reactions. Thus, knowledge of the total chromium content of leather samples expected to come into prolonged contact with human skin is very important. In the car industry, leather is used to cover seats, the steering wheel and the gearshift lever. The chromium contents of ten chromium-tanned leather samples used in the car industry were determined. All samples were first dried at 65 °C overnight, then cut into small pieces using a ceramic knife, weighed and analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES) after acidic microwave-assisted digestion. The total chromium amounts found were in the range from 19 mg/g up to 32 mg/g. The extraction yield of chromium from leather samples in sweat is approximately 2-7%. Thus, especially during long journeys in summer, chromium can be extracted in amounts that may cause nocuous effects, for example on the palms of the hands or on the back.
[Hygienic evaluation of the total mutagenic activity of snow samples from Magnitogorsk].
Legostaeva, T B; Ingel', F I; Antipanova, N A; Iurchenko, V V; Iuretseva, N A; Kotliar, N N
2010-01-01
The paper gives the results of 4-year monitoring of the total mutagenic activity of snow samples from different Magnitogorsk areas in a test for induction of dominant lethal mutations (DLM) in the gametes of Drosophila melanogaster. An association was found for the first time between the rate of DLM and the content of some chemical compounds in the ambient air and snow samples; moreover, all the substances present in the samples with known genotoxic effects showed a positive correlation with the rate of DLM. Furthermore, direct correlations were established for the first time between the rate of DLM and both the air pollution index and morbidity rates in 5-7-year-old children residing in the areas under study. The findings allow the DLM induction test in the gametes of Drosophila melanogaster to be recommended, owing to its unique informative and prognostic value, for monitoring ambient air pollution and for extensive use in the risk assessment system.
Health indicators: eliminating bias from convenience sampling estimators.
Hedt, Bethany L; Pagano, Marcello
2011-02-28
Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information is data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of the powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This not only allows us to take advantage of powerful inferential tools, but also provides more accurate information than is available from the random sample alone. Copyright © 2011 John Wiley & Sons, Ltd.
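The paper's annealing methodology is not detailed in the abstract; the following post-stratification sketch merely illustrates the underlying idea of letting a small random sample correct the convenience sample's coverage (the names, weighting scheme, and data are this sketch's assumptions, not the authors' method):

```python
def combined_estimate(conv_mean, rand_values, rand_in_frame):
    """Post-stratified sketch: the random sample estimates what fraction of
    the population resembles clinic visitors (in_frame) and supplies the
    mean for everyone else; the large convenience sample supplies the
    in-frame mean, where it is presumably far more precise."""
    n = len(rand_values)
    in_frame = [v for v, f in zip(rand_values, rand_in_frame) if f]
    out_frame = [v for v, f in zip(rand_values, rand_in_frame) if not f]
    w = len(in_frame) / n                       # estimated frame share
    out_mean = sum(out_frame) / len(out_frame)  # non-visitor mean
    return w * conv_mean + (1 - w) * out_mean

# Hypothetical: clinic data give indicator prevalence 0.6; the random
# sample says half the population never visits, with prevalence 0.5 there
print(combined_estimate(0.6, [1, 0, 1, 0], [True, True, False, False]))
```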
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
Full Text Available The aim of this paper was to present the usefulness of the binomial distribution in studying contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency-table units based on their mathematical expressions reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method (confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, for which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and the resulting surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All the implementations described above were used in computing confidence intervals and estimating their performance across binomial-distribution sample sizes and variables.
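As one concrete instance of a binomial confidence interval of the kind the paper evaluates, the Wilson score interval can be computed directly (the paper's own PHP algorithms are not reproduced here):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    z = 1.96 corresponds to a 95% interval. Unlike the Wald interval,
    it behaves sensibly for proportions near 0 or 1 and small n."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# 8 successes in 10 trials: the interval is asymmetric around 0.8
print(wilson_ci(8, 10))
```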
An alternative procedure for estimating the population mean in simple random sampling
Directory of Open Access Journals (Sweden)
Housila P. Singh
2012-03-01
Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including that of Gupta and Shabbir (2008).
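The classical ratio estimator underlying this line of work is straightforward to state in code; a minimal sketch (the paper's own class of estimators generalizes this basic form):

```python
def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    y_ratio = (sum(y) / sum(x)) * X_bar,
    where X_bar is the known population mean of the auxiliary variable x.
    It gains efficiency over the sample mean when y and x are
    positively correlated through the origin."""
    r = sum(y_sample) / sum(x_sample)  # sample ratio R-hat
    return r * x_pop_mean

# Toy data where y is about twice x and the population mean of x is 2.5
print(ratio_estimate([2.0, 4.0, 6.0], [1.0, 2.0, 3.0], 2.5))
```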
Scholl, M.A.; Ingebritsen, S.E.
1995-01-01
Six-month cumulative precipitation samples provide estimates of bulk deposition of sulfate and chloride for the southeast part of the Island of Hawaii during four time periods: August 1991 to February 1992, February 1992 to September 1992, March 1993 to September 1993, and September 1993 to February 1994. Total estimated bulk deposition rates for sulfate ranged from 0.12 to 24 grams per square meter per 180 days, and non-seasalt sulfate deposition ranged from 0.06 to 24 grams per square meter per 180 days. Patterns of non-seasalt sulfate deposition were generally related to prevailing wind directions and the proximity of the collection site to large sources of sulfur gases, namely Kilauea Volcano's summit and East Rift Zone eruption. Total chloride deposition from bulk precipitation samples ranged from 0.01 to 17 grams per square meter per 180 days. Chloride appeared to be predominantly from oceanic sources, as non-seasalt chloride deposition was near zero for most sites.
Sequential sampling, magnitude estimation, and the wisdom of crowds
DEFF Research Database (Denmark)
Nash, Ulrik W.
2017-01-01
in the wisdom of crowds indicated by judgment distribution skewness. The present study reports findings from an experiment on magnitude estimation and supports these predictions. The study moreover demonstrates that systematic errors by groups of people can be corrected using information about the judgment...
Using Mobile Device Samples to Estimate Traffic Volumes
2017-12-01
In this project, TTI worked with StreetLight Data to evaluate a beta version of its traffic volume estimates derived from global positioning system (GPS)-based mobile devices. TTI evaluated the accuracy of average annual daily traffic (AADT) volume :...
Zhang, Yuanyuan; Ng, Ding-Quan; Lin, Yi-Pin
2012-07-01
Lead and its compounds are toxic and can harm human health, especially the intelligence development in children. Accurate measurement of total lead present in drinking water is crucial in determining the extent of lead contamination and human exposure due to drinking water consumption. The USEPA method for total lead measurement (no. 200.8) is often used to analyze lead levels in drinking water. However, in the presence of high concentration of the tetravalent lead corrosion product PbO(2), the USEPA method was not able to fully recover particulate lead due to incomplete dissolution of PbO(2) particles during strong acid digestion. In this study, a new procedure that integrates membrane separation, iodometric PbO(2) measurement, strong acid digestion and ICP-MS measurement was proposed and evaluated for accurate total lead measurement and quantification of different lead fractions including soluble Pb(2+), particulate Pb(II) carbonate and PbO(2) in drinking water samples. The proposed procedure was evaluated using drinking water reconstituted with spiked Pb(2+), spiked particulate Pb(II) carbonate and in situ formed or spiked PbO(2). Recovery tests showed that the proposed procedure and the USEPA method can achieve 93-112% and 86-103% recoveries respectively for samples containing low PbO(2) concentrations (0.018-0.076 mg Pb per L). For samples containing higher concentrations of PbO(2) (0.089-1.316 mg Pb per L), the USEPA method failed to meet the recovery requirement for total lead (85-115%) while the proposed method can achieve satisfactory recoveries (91-111%) and differentiate the soluble Pb(2+), particulate Pb(II) carbonate and PbO(2).
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
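The method-of-moments (Matheron) variogram estimator discussed above can be sketched in a few lines; the toy transect data below are invented for illustration only.

```python
import math
from collections import defaultdict

def empirical_variogram(points, values, bin_width):
    """Matheron's method-of-moments estimator:
    gamma(h) = (1 / (2 * N(h))) * sum over point pairs at lag ~h of (z_i - z_j)**2."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            b = int(math.dist(points[i], points[j]) // bin_width)  # lag bin index
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    # Return {bin midpoint: semivariance estimate}
    return {(b + 0.5) * bin_width: sums[b] / (2 * counts[b]) for b in sorted(counts)}

# Toy transect: throughfall-like values at 1 m spacing.
pts = [(float(x), 0.0) for x in range(10)]
vals = [1.0, 1.2, 0.9, 1.4, 1.1, 0.8, 1.3, 1.0, 1.2, 0.9]
print(empirical_variogram(pts, vals, 2.0))
```

Robust alternatives (e.g. Cressie-Hawkins) replace the squared differences with a transformation less sensitive to the large outliers the study mentions.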
International Nuclear Information System (INIS)
Wright, T.
1982-01-01
A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling
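The record does not reproduce the combined estimator, but the inverse binomial component admits a classical unbiased estimator (Haldane's (k-1)/(N-1), where N is the number of Bernoulli trials needed to reach k successes). A simulation sketch under that assumption:

```python
import random

def inverse_binomial_sample(p, k, rng):
    """Draw Bernoulli(p) trials until k successes are seen; return total trials N."""
    n = successes = 0
    while successes < k:
        n += 1
        successes += rng.random() < p
    return n

def haldane_estimate(k, n):
    """Haldane's unbiased estimator of p under inverse binomial sampling."""
    return (k - 1) / (n - 1)

rng = random.Random(42)
ests = [haldane_estimate(5, inverse_binomial_sample(0.3, 5, rng)) for _ in range(20000)]
print(sum(ests) / len(ests))  # close to the true proportion 0.3
```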
Estimating Total Discharge in the Yangtze River Basin Using Satellite-Based Observations
Directory of Open Access Journals (Sweden)
Samuel A. Andam‑Akorful
2013-07-01
Full Text Available The measurement of total basin discharge along coastal regions is necessary for understanding the hydrological and oceanographic issues related to the water and energy cycles. However, only the observed streamflow (gauge-based observation) is used to estimate the total fluxes from the river basin to the ocean, neglecting the portion of discharge that infiltrates underground and discharges directly into the ocean. Hence, the aim of this study is to assess the total discharge of the Yangtze River (Chang Jiang) basin. In this study, we explore the potential response of total discharge to changes in precipitation (from the Tropical Rainfall Measuring Mission—TRMM), evaporation (from four versions of the Global Land Data Assimilation System—GLDAS, namely CLM, Mosaic, Noah and VIC), and water-storage changes (from the Gravity Recovery and Climate Experiment—GRACE) by using the terrestrial water budget method. This method has been validated by comparison with the observed streamflow, and shows an agreement with a root mean square error (RMSE) of 14.30 mm/month for GRACE-based discharge and 20.98 mm/month for that derived from precipitation minus evaporation (P − E). This improvement of approximately 32% indicates that monthly terrestrial water-storage changes, as estimated by GRACE, cannot be considered negligible over the Yangtze basin. The results for the proposed method are more accurate than the results previously reported in the literature.
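The terrestrial water budget method described above reduces to R = P − E − dS/dt per month; a minimal sketch with invented monthly values (not the study's data):

```python
def water_budget_discharge(precip, evap, storage_change):
    """Terrestrial water budget: runoff R = P - E - dS/dt, all in mm/month.
    storage_change is the month-to-month change in terrestrial water storage
    (e.g. from GRACE)."""
    return [p - e - ds for p, e, ds in zip(precip, evap, storage_change)]

# Illustrative monthly values (mm/month), not from the study:
P  = [120.0, 95.0, 60.0]
E  = [70.0, 65.0, 40.0]
dS = [10.0, -5.0, 0.0]
print(water_budget_discharge(P, E, dS))  # [40.0, 35.0, 20.0]
```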
Sutton, Tracey; Hopkins, Thomas; Remsen, Andrew; Burghart, Scott
2001-01-01
Sampling was conducted on the west Florida continental shelf ecosystem modeling site to estimate zooplankton grazing impact on primary production. Samples were collected with the high-resolution sampler, a towed array bearing electronic and optical sensors operating in tandem with a paired net/bottle verification system. A close biological-physical coupling was observed, with three main plankton communities: 1. a high-density inshore community dominated by larvaceans coincident with a salinity gradient; 2. a low-density offshore community dominated by small calanoid copepods coincident with the warm mixed layer; and 3. a high-density offshore community dominated by small poecilostomatoid and cyclopoid copepods and ostracods coincident with cooler, sub-pycnocline oceanic water. Both high-density communities were associated with relatively turbid water. Applying available grazing rates from the literature to our abundance data, grazing pressure mirrored the above bio-physical pattern, with the offshore sub-pycnocline community contributing ˜65% of grazing pressure despite representing only 19% of the total volume of the transect. This suggests that grazing pressure is highly localized, emphasizing the importance of high-resolution sampling to better understand plankton dynamics. A comparison of our grazing rate estimates with primary production estimates suggests that mesozooplankton do not control the fate of phytoplankton over much of the area studied (<5% grazing of daily primary production), but "hot spots" (˜25-50% grazing) do occur which may have an effect on floral composition.
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given...... reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...
2013-01-01
Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life), therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment have been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
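A simplified stand-in for the slope-based half-life estimate (a log-linear least-squares fit on the density-time profile, not the WWARN PCE tool itself, which also handles lag and tail phases):

```python
import math

def clearance_half_life(times_h, densities):
    """Estimate parasite clearance half-life (hours) from a log-linear fit:
    fit ln(density) = a + b*t by least squares, then half-life = ln(2) / |b|."""
    logs = [math.log(d) for d in densities]
    n = len(times_h)
    tbar = sum(times_h) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times_h, logs))
             / sum((t - tbar) ** 2 for t in times_h))
    return math.log(2) / -slope  # slope is negative for a clearing infection

# Synthetic 6-hourly profile with a true half-life of 3 h:
times = [0, 6, 12, 18, 24]
dens = [1e5 * 0.5 ** (t / 3.0) for t in times]
print(round(clearance_half_life(times, dens), 2))  # 3.0
```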
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
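A minimal sketch of the linear-regression estimation models described above; the calibration data and the single "pump-intrinsic parameter" are hypothetical, since the abstract does not specify how the parameters are derived from motor current and piston position.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b, the form of the paper's
    linear-regression estimation models."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    a = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return a, ybar - a * xbar

# Hypothetical calibration data: a pump-intrinsic parameter (derived from motor
# current and piston position) vs. measured mean aortic pressure (mmHg).
intrinsic = [0.8, 1.0, 1.2, 1.4, 1.6, 1.8]
aop_mean = [62.0, 70.0, 78.0, 86.0, 94.0, 102.0]
slope, intercept = fit_line(intrinsic, aop_mean)
print(slope * 1.3 + intercept)  # predicted AoPmean at an unseen operating point
```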
Ialongo, Cristiano; Pieri, Massimo; Bernardini, Sergio
2017-02-01
Diluting a sample to obtain a measure within the analytical range is a common task in clinical laboratories. However, for urgent samples, it can cause delays in test reporting, which can put patients' safety at risk. The aim of this work is to show a simple artificial neural network that can decide whether a sample needs predilution, using only the information available through the laboratory information system. Specifically, a Multilayer Perceptron neural network built on a data set of 16,106 cardiac troponin I test records produced a correct inference rate of 100% for samples not requiring predilution and 86.2% for those requiring predilution. With respect to the inference reliability, the most relevant inputs were the presence of a cardiac event or surgery and the result of the previous assay. Therefore, such an artificial neural network can be easily implemented into a total automation framework to appreciably reduce the turnaround time of critical orders delayed by the operations required to retrieve, dilute, and retest the sample.
Estimation of radon concentration in soil and groundwater samples of Northern Rajasthan, India
International Nuclear Information System (INIS)
Mittal, Sudhir; Asha Rani; Mehra, Rohit
2015-01-01
In the present investigation, analysis of radon concentration in 20 water and soil samples collected from different locations in the Bikaner and Jhunjhunu districts of Rajasthan, India, has been carried out using RAD7, an electronic radon detector. The water samples are taken from hand pumps and tube wells with depths ranging from 50 to 600 feet. All the soil-gas measurements have been carried out at 100 cm depth. The measured radon concentration in water samples lies in the range 0.50 to 22 Bq l-1 with a mean value of 4.42 Bq l-1. In only one water sample is the radon concentration found to be higher than the safe limit of 11 Bq l-1 recommended by the US Environmental Protection Agency (USEPA, 1991). The measured radon concentration in all groundwater samples is within the safe limit of 4 to 40 Bq l-1 recommended by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR, 2008). The total annual effective dose estimated due to radon concentration in water ranges from 1.37 to 60 μSv y-1 with a mean value of 12.08 μSv y-1. The total annual effective dose at all locations of the studied area is found to be well within the safe limit of 0.1 mSv y-1 recommended by the World Health Organization (WHO, 2004) and the European Council (EC, 1998). Radon concentration in soil samples varies from 941 to 10050 Bq m-3 with a mean value of 4561 Bq m-3, and lies within the range reported by other investigators. Moreover, a positive correlation of radon concentration in water with that in soil has been observed. The soil and water of the Bikaner and Jhunjhunu districts are thus suitable for drinking and construction purposes without posing any health hazard. (author)
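The annual ingestion dose from radon in water is typically computed as concentration × annual water intake × dose conversion factor. The intake and conversion factor below are assumed, UNSCEAR-style values for illustration; they are not taken from the study and do not reproduce its exact dose figures.

```python
def annual_effective_dose_uSv(c_bq_per_l, intake_l_per_y=730.0, dcf_usv_per_bq=0.0035):
    """Ingestion dose from radon in drinking water: dose = C * intake * DCF.
    intake_l_per_y and dcf_usv_per_bq are assumed illustrative parameters
    (2 L/day of direct consumption, 3.5 nSv per Bq ingested)."""
    return c_bq_per_l * intake_l_per_y * dcf_usv_per_bq

# Dose for the mean measured concentration of 4.42 Bq/l:
print(round(annual_effective_dose_uSv(4.42), 2))  # μSv per year
```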
Total Discharge Estimation in the Korean Peninsula Using Multi-Satellite Products
Directory of Open Access Journals (Sweden)
Jae Young Seo
2017-07-01
Full Text Available Estimation of total discharge is necessary to understand the hydrological cycle and to manage water resources efficiently. However, the task is problematic in areas where ground observations are limited; the North Korea region is one example. Here, the total discharge was estimated based on the water balance using multiple satellite products: the terrestrial water storage changes (TWSC) derived from the Gravity Recovery and Climate Experiment (GRACE), precipitation from the Tropical Rainfall Measuring Mission (TRMM), and evapotranspiration from the Moderate Resolution Imaging Spectroradiometer (MODIS). The satellite-based discharge was compared with land surface model products of the Global Land Data Assimilation System (GLDAS), and a positive relationship between the results was obtained (r = 0.70–0.86; bias = −9.08–16.99 mm/month; RMSE = 36.90–62.56 mm/month; NSE = 0.01–0.62). Among the four land surface models of GLDAS (CLM, Mosaic, Noah, and VIC), CLM corresponded best with the satellite-based discharge; the satellite-based discharge tends to slightly overestimate the model-based discharge in the dry season. Also, the total discharge data based on the Precipitation-Runoff Modeling System (PRMS) and the in situ discharge for the five major river basins in South Korea show comparable seasonality and high correlation with the satellite-based discharge. In spite of the relatively low spatial resolution of GRACE, and the loss of information incurred in integrating three different satellite products, the proposed methodology can be a practical tool to estimate the total discharge with reasonable accuracy, especially in regions with scarce hydrologic data.
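The agreement statistics quoted above (RMSE and Nash-Sutcliffe efficiency) can be computed as follows; the observed and simulated series are invented for illustration.

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect match; <= 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_obs = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_obs

obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 33.0, 39.0]
print(round(rmse(obs, sim), 3), round(nse(obs, sim), 3))
```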
A Comparison of "Total Dust" and Inhalable Personal Sampling for Beryllium Exposure
Energy Technology Data Exchange (ETDEWEB)
Carter, Colleen M. [Tulane Univ., New Orleans, LA (United States). School of Public Health and Tropical Medicine
2012-05-09
In 2009, the American Conference of Governmental Industrial Hygienists (ACGIH) reduced the Beryllium (Be) 8-hr Time Weighted Average Threshold Limit Value (TLV-TWA) from 2.0 μg/m^{3} to 0.05 μg/m^{3} with an inhalable 'I' designation in accordance with ACGIH's particle size-selective criterion for inhalable mass. Currently, per the Department of Energy (DOE) requirements, the Lawrence Livermore National Laboratory (LLNL) is following the Occupational Health and Safety Administration (OSHA) Permissible Exposure Limit (PEL) of 2.0 μg/m^{3} as an 8-hr TWA, which is also the 2005 ACGIH TLV-TWA, and an Action Level (AL) of 0.2 μg/m^{3} and sampling is performed using the 37mm (total dust) sampling method. Since DOE is considering adopting the newer 2009 TLV guidelines, the goal of this study was to determine if the current method of sampling using the 37mm (total dust) sampler would produce results that are comparable to what would be measured using the IOM (inhalable) sampler specific to the application of high energy explosive work at LLNL's remote experimental test facility at Site 300. Side-by-side personal sampling using the two samplers was performed over an approximately two-week period during chamber re-entry and cleanup procedures following detonation of an explosive assembly containing Beryllium (Be). The average ratio of personal sampling results for the IOM (inhalable) vs. 37-mm (total dust) sampler was 1.1:1 with a P-value of 0.62, indicating that there was no statistically significant difference in the performance of the two samplers. Therefore, for the type of activity monitored during this study, the 37-mm sampling cassette would be considered a suitable alternative to the IOM sampler for collecting inhalable particulate matter, which is important given the many practical and economic advantages that it presents. However, similar comparison studies would be necessary for this conclusion to be
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...
Automated modal parameter estimation using correlation analysis and bootstrap sampling
Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.
2018-02-01
The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
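The paper's specific correlation metrics are not reproduced in the abstract, but the standard mode-shape correlation used in this kind of clustering is the Modal Assurance Criterion (MAC); a sketch for real-valued mode shapes:

```python
def mac(phi1, phi2):
    """Modal Assurance Criterion between two real mode shapes:
    MAC = (phi1 . phi2)**2 / ((phi1 . phi1) * (phi2 . phi2)).
    MAC = 1 means identical shape up to scaling; near 0 means unrelated."""
    dot = sum(a * b for a, b in zip(phi1, phi2))
    return dot ** 2 / (sum(a * a for a in phi1) * sum(b * b for b in phi2))

shape_a = [1.0, 0.8, 0.3]
shape_b = [2.0, 1.6, 0.6]   # same shape, different scaling -> MAC = 1
shape_c = [1.0, -0.5, 0.2]  # a different shape -> low MAC
print(round(mac(shape_a, shape_b), 3), round(mac(shape_a, shape_c), 3))
```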
International Nuclear Information System (INIS)
Douglas, J.G.; Meznarich, H.K.; Olsen, J.R.; Ross, G.A.; Stauffer, M.
2009-01-01
Total organic halogen (TOX) is used as a parameter to screen groundwater samples at the Hanford Site. Trending is done for each groundwater well, and changes in TOX and other screening parameters can lead to costly changes in the monitoring protocol. The Waste Sampling and Characterization Facility (WSCF) analyzes groundwater samples for TOX using the United States Environmental Protection Agency (EPA) SW-846 method 9020B (EPA 1996a). Samples from the Soil and Groundwater Remediation Project (SGRP) are submitted to the WSCF for analysis without information regarding the source of the sample; each sample is in essence a "blind" sample to the laboratory. Feedback from the SGRP indicated that some of the WSCF-generated TOX data from groundwater wells had a number of outlier values based on the historical trends (Anastos 2008a). Additionally, analysts at WSCF observed inconsistent TOX results among field sample replicates. Therefore, the WSCF lab performed an investigation of the TOX analysis to determine the cause of the outlier data points. Two causes were found to contribute to generating out-of-trend TOX data: (1) The presence of inorganic chloride in the groundwater samples: at inorganic chloride concentrations greater than about 10 parts per million (ppm), apparent TOX values increase with increasing chloride concentration. A parallel observation is the increase in apparent breakthrough of TOX from the first to the second activated-carbon adsorption tube with increasing inorganic chloride concentration. (2) During the sample preparation step, excessive purging of the adsorption tubes with oxygen pressurization gas after sample loading may cause channeling in the activated-carbon bed. This channeling leads to poor removal of inorganic chloride during the subsequent wash step with aqueous potassium nitrate. The presence of this residual inorganic chloride then produces erroneously high TOX values. Changes in sample preparation were studied to more effectively
A geostatistical estimation of zinc grade in bore-core samples
International Nuclear Information System (INIS)
Starzec, A.
1987-01-01
Possibilities and preliminary results of a geostatistical interpretation of the XRF determination of zinc in bore-core samples are considered. For the spherical model of the variogram, the estimation variance of the grade in a disk-shaped sample (estimated from the grade on the circumference of the sample) is calculated. Variograms of zinc grade in core samples are presented and examples of the grade estimation are discussed. 4 refs., 7 figs., 1 tab. (author)
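A sketch of the spherical variogram model underlying the estimation-variance calculation; the nugget, sill, and range values below are invented for illustration.

```python
def spherical_variogram(h, nugget, sill, range_a):
    """Spherical variogram model, evaluated for lag h > 0:
    gamma(h) = nugget + (sill - nugget) * (1.5*(h/a) - 0.5*(h/a)**3)  for h <= a,
    gamma(h) = sill                                                   for h >  a.
    The nugget is the limit as h -> 0+; by definition gamma(0) = 0."""
    if h >= range_a:
        return sill
    r = h / range_a
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

# Illustrative parameters: nugget 0.1, sill 1.0, range 50 m.
print(spherical_variogram(25.0, 0.1, 1.0, 50.0))  # inside the range
print(spherical_variogram(80.0, 0.1, 1.0, 50.0))  # beyond the range: the sill
```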
Sampling strategy for estimating human exposure pathways to consumer chemicals
Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.
2016-01-01
Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.
Estimates of the Sampling Distribution of Scalability Coefficient H
Van Onna, Marieke J. H.
2004-01-01
Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
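A generic bootstrap sketch of a statistic's sampling distribution, the alternative the abstract points to. The plain mean stands in for coefficient H, whose formula is not reproduced here, and the scores are invented.

```python
import random
import statistics

def bootstrap_distribution(data, stat, n_boot, seed=1):
    """Approximate the sampling distribution of a statistic by resampling the
    observed data with replacement (an alternative to asymptotic derivations)."""
    rng = random.Random(seed)
    n = len(data)
    return [stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(n_boot)]

# Hypothetical scalability-like scores; the statistic is the mean for brevity.
scores = [0.35, 0.42, 0.44, 0.47, 0.51, 0.55, 0.58, 0.60, 0.63, 0.71]
boot = sorted(bootstrap_distribution(scores, statistics.mean, 2000))
lo, hi = boot[50], boot[1949]  # approximate 95% percentile interval
print(round(lo, 3), round(hi, 3))
```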
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
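For the min/median/max scenario, a commonly cited estimator from this line of work combines (a + 2m + b)/4 for the mean with a range-based standard deviation that incorporates the sample size through the expected range of n standard normal draws. The exact constants below are an assumption sketched from that literature, not a verbatim reproduction of the paper's formulas.

```python
from statistics import NormalDist

def mean_sd_from_min_median_max(a, m, b, n):
    """Estimate the sample mean and SD from the minimum (a), median (m),
    maximum (b), and sample size n. The SD divides the range by an
    approximation of the expected range of n standard normal variables."""
    mean = (a + 2 * m + b) / 4.0
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))  # expected range
    sd = (b - a) / xi
    return mean, sd

# Illustrative trial summary: min 10, median 50, max 90, n = 100.
est_mean, est_sd = mean_sd_from_min_median_max(10.0, 50.0, 90.0, 100)
print(round(est_mean, 1), round(est_sd, 1))
```

Note how the same range of 80 yields a smaller SD estimate as n grows, which is the correction the abstract says the earlier range/4-style rules lacked.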
Directory of Open Access Journals (Sweden)
Joana B.M. Almeida
2013-12-01
Full Text Available The objective of this study was to develop a total economic merit index that identifies more profitable animals using Portugal as a case study to illustrate the recent economic changes in milk production. Economic values were estimated following future global prices and EU policy, and taking into consideration the priorities of the Portuguese dairy sector. Economic values were derived using an objective system analysis with a positive approach, that involved the comparison of several alternatives, using real technical and economic data from national dairy farms. The estimated relative economic values revealed a high importance of production traits, low for morphological traits and a value of zero for somatic cell score. According to several future market expectations, three scenarios for milk production were defined: a realistic, a pessimistic and an optimistic setting, each with projected future economic values. Responses to selection and efficiency of selection of the indices were compared to a fourth scenario that represents the current selection situation in Portugal, based on individual estimated breeding values for milk yield. Although profit resulting from sale of milk per average lactation in the optimistic scenario was higher than in the realistic scenario, the volatility of future economic conditions and uncertainty about the future milk pricing system should be considered. Due to this market instability, genetic improvement programs require new definitions of profit functions for the near future. Effective genetic progress direction must be verified so that total economic merit formulae can be adjusted and selection criteria redirected to the newly defined target goals.
GIS Approach to Estimation of the Total Phosphorous Transfer in the Pilica River Lowland Catchment
Directory of Open Access Journals (Sweden)
Magnuszewski Artur
2014-09-01
Full Text Available In this paper, the Pilica River catchment (central Poland) is analyzed with a focus on understanding the total phosphorous transfer along the river system, which also contains the large artificial Sulejów Reservoir. The paper presents a GIS method for estimating the total phosphorous (TP) load from proxy data representing sub-catchment land use and census data. The modelled load of TP is compared to the actual transfer of TP in the Pilica River system. The results show that the metrics of connectivity between the river system and dwelling areas, as well as settlement density in the sub-catchments, are useful predictors of the total phosphorous load. The presence of a large reservoir in the middle course of the river can disrupt nutrient transport along the river continuum by trapping and retaining suspended sediment and its associated TP load. Indirect estimation of TP loads with the GIS analysis can be useful for identifying beneficial reservoir locations in a catchment. The study has shown that the Sulejów Reservoir is located in the sub-catchment with the largest load of TP, a feature that helps explain the problem of reservoir eutrophication
Alberti, Giancarla; Biesuz, Raffaela; Pesavento, Maria
2008-12-01
Different natural water samples were investigated to determine the total concentration and the distribution of species for Cu(II), Pb(II), Al(III) and U(VI). The proposed method, named resin titration (RT), was developed in our laboratory to investigate the distribution of species for metal ions in complex matrices. It is a competition method, in which a complexing resin competes with the natural ligands present in the sample to bind the metal ions. In the present paper, river, estuarine and seawater samples, collected during a cruise in the Adriatic Sea, were investigated. For each sample, two RTs were performed using different complexing resins: the iminodiacetic Chelex 100 and the carboxylic Amberlite CG50. In this way, it was possible to detect different classes of ligands. Satisfactory results have been obtained and are commented on critically. They were summarized by principal component analysis (PCA), and the correlations with physicochemical parameters allowed one to follow the evolution of the metals along the considered transect. It should be pointed out that, according to our findings, the ligands responsible for metal-ion complexation are not the major components of the water system, since they form considerably weaker complexes.
Respondent-driven sampling: determinants of recruitment and a method to improve point estimation.
Directory of Open Access Journals (Sweden)
Nicky McCreesh
Full Text Available Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore whether biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods), and also of presentation for interview if offered a coupon, by age and socioeconomic status group. Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status. Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
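The proposed weighting amounts to inverse-probability weighting by P(offered a coupon) × P(presents for interview). A toy sketch with hypothetical group probabilities (the study's actual probabilities and groups are not reproduced here):

```python
def ipw_proportion(samples, p_offer, p_present, target="low"):
    """Weighted proportion of the target group: each recruit is weighted by the
    inverse of P(offered) * P(presents), looked up by the recruit's group."""
    weights = [1.0 / (p_offer[g] * p_present[g]) for g in samples]
    target_w = sum(w for g, w in zip(samples, weights) if g == target)
    return target_w / sum(weights)

# Hypothetical recruitment probabilities by socioeconomic group:
p_offer = {"low": 0.5, "high": 0.5}
p_present = {"low": 0.8, "high": 0.4}   # higher-SES men present less often
recruits = ["low"] * 40 + ["high"] * 10  # observed (biased) sample
print(round(ipw_proportion(recruits, p_offer, p_present), 3))
```

The naive sample proportion of the "low" group here is 0.8; reweighting for the under-presentation of the "high" group pulls the estimate down toward the population value.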
Global CO2 fluxes estimated from GOSAT retrievals of total column CO2
Directory of Open Access Journals (Sweden)
S. Basu
2013-09-01
Full Text Available We present one of the first estimates of the global distribution of CO2 surface fluxes using total column CO2 measurements retrieved by the SRON-KIT RemoTeC algorithm from the Greenhouse gases Observing SATellite (GOSAT). We derive optimized fluxes from June 2009 to December 2010. We estimate fluxes from surface CO2 measurements to use as baselines for comparing GOSAT data-derived fluxes. Assimilating only GOSAT data, we can reproduce the observed CO2 time series at surface and TCCON sites in the tropics and the northern extra-tropics. In contrast, in the southern extra-tropics GOSAT XCO2 leads to enhanced seasonal cycle amplitudes compared to independent measurements, which we identify as the result of a land–sea bias in our GOSAT XCO2 retrievals. A bias correction in the form of a global offset between GOSAT land and sea pixels in a joint inversion of satellite and surface measurements of CO2 yields plausible global flux estimates which are more tightly constrained than in an inversion using surface CO2 data alone. We show that assimilating the bias-corrected GOSAT data on top of surface CO2 data (a) reduces the estimated global land sink of CO2, and (b) shifts the terrestrial net uptake of carbon from the tropics to the extra-tropics. It is concluded that while GOSAT total column CO2 provides useful constraints for source–sink inversions, small spatiotemporal biases (beyond what can be detected using current validation techniques) have serious consequences for optimized fluxes, even aggregated over continental scales.
Total decay heat estimates in a proto-type fast reactor
International Nuclear Information System (INIS)
Sridharan, M.S.
2003-01-01
Full text: In this paper, total decay heat values generated in a prototype fast reactor are estimated. These values are compared with those of certain other fast reactors. Simple analytical fits are also obtained for these values, which can serve as a handy and convenient tool in engineering design studies. These decay heat values, taken as a ratio to the nominal operating power, are in general applicable to any typical plutonium-based fast reactor and are useful inputs to the design of decay-heat removal systems.
Analysis of total least squares in estimating the parameters of a mortar trajectory
Energy Technology Data Exchange (ETDEWEB)
Lau, D.L.; Ng, L.C.
1994-12-01
Least Squares (LS) is a method of curve fitting used under the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as in the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results (on the order of 10%) over the LS method.
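The LS/TLS comparison can be illustrated with a minimal sketch (not the report's implementation): ordinary least squares via `numpy.linalg.lstsq`, and total least squares via the smallest right singular vector of the augmented matrix [A | b]. The mortar trajectory data are replaced here by a noise-free synthetic line, so both methods recover the same parameters.

```python
# Sketch comparing ordinary least squares with total least squares for a
# linear model, assuming errors may exist in both the data matrix A and
# the observation vector b. TLS uses the SVD of the augmented matrix.
import numpy as np

def ls_fit(A, b):
    # classical least squares: minimizes ||A x - b||
    return np.linalg.lstsq(A, b, rcond=None)[0]

def tls_fit(A, b):
    # TLS solution from the right singular vector associated with the
    # smallest singular value of [A | b]
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]
    return -v[:-1] / v[-1]

t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0                      # synthetic "observations"
print(ls_fit(A, b), tls_fit(A, b))     # both recover slope 2, intercept 1
```

With noise added to `A` as well as `b`, the two estimates diverge, and TLS is the one that accounts for the errors-in-variables structure.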
Near-real-time Estimation and Forecast of Total Precipitable Water in Europe
Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.
2013-12-01
Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water, TPW) is essential for understanding weather and the environment, including the greenhouse effect, the climate system with its feedbacks, and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimations of water vapor content to provide realistic forecasts, including the representation of clouds and precipitation. In the present study we introduce our research activity for the estimation and forecast of atmospheric water vapor in Central Europe using both observations and models. The Eötvös Loránd University (Hungary) has operated a polar orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar orbiting satellites, including the MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from the satellites Terra and Aqua. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level2 software, total precipitable water is calculated operationally using two different methods. The quality of the TPW estimations is crucial for further application of the results; thus, a validation of the remotely sensed total precipitable water fields against radiosonde data is presented. In a current research project in Hungary we aim to compare different estimations of atmospheric water vapor content. Within the frame of the project we use an NWP model (DBCRAS; Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin, Madison) to forecast TPW. DBCRAS uses near-real-time Level2 products from the MODIS data processing chain. From the wide range of the derived Level2 products the MODIS TPW parameter found within the so-called mod07 results (Atmospheric Profiles Product) and the cloud top pressure and cloud effective emissivity parameters from the so
Directory of Open Access Journals (Sweden)
Patrick Habecker
Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
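The basic scale-up estimator that the authors build on can be sketched as follows. This is the classical NSUM formula, not the paper's new estimator, and all survey numbers below are hypothetical: each respondent's personal network size is first estimated from probe groups of known size, and the hidden-group size is then scaled up from how many hidden-group members respondents report knowing.

```python
# Sketch of the basic network scale-up estimator:
#   hidden size ~= N * sum(m_i) / sum(c_i)
# where m_i is how many hidden-group members respondent i knows and c_i is
# respondent i's network size, estimated from probe groups of known size.
def network_size(known_counts, subgroup_sizes, N):
    # c_i = N * (alters known in probe groups) / (total probe-group size)
    return N * sum(known_counts) / sum(subgroup_sizes)

def nsum_estimate(m, known, subgroup_sizes, N):
    c = [network_size(k, subgroup_sizes, N) for k in known]
    return N * sum(m) / sum(c)

# Hypothetical survey of 3 respondents in a population of N = 10000.
subgroup_sizes = [100, 200]          # probe-group sizes (assumed known)
known = [[2, 3], [1, 2], [0, 1]]     # alters known per probe group
m = [1, 0, 1]                        # hidden-group members known
print(round(nsum_estimate(m, known, subgroup_sizes, N=10000)))
```

The paper's contribution is precisely about what this naive version gets wrong: unequal weighting of scaling variables and poorly performing probe groups, which their trimming procedure removes.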
Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling
DEFF Research Database (Denmark)
Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper
2014-01-01
The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...
Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.
Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian
2004-05-01
Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine their efficacy in producing a representative manure sample. The total solids content of manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimation error increased from ±10% to ±30% as the TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure decreased. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.
Energy Technology Data Exchange (ETDEWEB)
Kulesh, N.A., E-mail: nikita.kulesh@urfu.ru [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Novoselova, I.P. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Immanuel Kant Baltic Federal University, 236041 Kaliningrad (Russian Federation); Safronov, A.P. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Institute of Electrophysics UD RAS, Amundsen 106, 620016 Ekaterinburg (Russian Federation); Beketov, I.V.; Samatov, O.M. [Institute of Electrophysics UD RAS, Amundsen 106, 620016 Ekaterinburg (Russian Federation); Kurlyandskaya, G.V. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); University of the Basque Country UPV-EHU, 48940 Leioa (Spain); Morozova, M. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Denisova, T.P. [Irkutsk State University, Karl Marks 1, 664003 Irkutsk (Russian Federation)
2016-10-01
In this study, total reflection x-ray fluorescence (TXRF) spectrometry was applied for the evaluation of iron concentration in ferrofluids and biological samples containing iron oxide magnetic nanoparticles obtained by the laser target evaporation technique. Suspensions of maghemite nanoparticles of different concentrations were used to estimate the limits of the method for the evaluation of nanoparticle concentration in the range of 1–5000 ppm in the absence of an organic matrix. Samples of single-cell yeasts grown in nutrient media containing maghemite nanoparticles were used to study the nanoparticle absorption mechanism. The obtained results were analyzed in terms of the applicability of TXRF for quantitative analysis over a wide range of iron oxide nanoparticle concentrations in biological samples and ferrofluids, with a simple established protocol of specimen preparation. - Highlights: • Ferrofluid and yeast samples were analysed by TXRF spectroscopy. • A simple protocol for iron quantification by means of TXRF was proposed. • Results were combined with magnetic, structural, and morphological characterization. • A preliminary conclusion on the nanoparticle uptake mechanism was made.
Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd
In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)
Sample size methods for estimating HIV incidence from cross-sectional surveys.
Konikoff, Jacob; Brookmeyer, Ron
2015-12-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
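The incidence approximation described above (individuals in the biomarker-defined early stage divided by the product of the uninfected count and the mean early-stage duration) can be sketched directly. The counts and duration below are hypothetical.

```python
# Sketch of the cross-sectional incidence approximation:
#   incidence ~= (count in early disease stage) /
#                (count uninfected * mean duration of the early stage)
def incidence_estimate(n_early, n_uninfected, mean_duration_years):
    return n_early / (n_uninfected * mean_duration_years)

# Hypothetical survey: 40 people in the biomarker-defined early stage,
# 4000 uninfected, early stage lasting ~0.5 years on average.
print(incidence_estimate(40, 4000, 0.5))   # 0.02 infections per person-year
```

The paper's sample-size methods then ask how large such a survey must be for this estimate, and for differences between two surveys, to be sufficiently precise given uncertainty in the mean duration.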
Total Body Capacitance for Estimating Human Basal Metabolic Rate in an Egyptian Population
M. Abdel-Mageed, Samir; I. Mohamed, Ehab
2016-01-01
Determining basal metabolic rate (BMR) is important for estimating total energy needs in human beings, yet concerns have been raised regarding the suitability of sex-specific equations based on age and weight for its calculation on an individual or population basis. It has been shown that body cell mass (BCM) is the body compartment responsible for BMR. The objectives of this study were to investigate the relationship between total body capacitance (TBC), which is considered an expression of BCM, and BMR, and to develop a formula for calculating BMR in comparison with widely used equations. Fifty healthy nonsmoking male volunteers [mean age (± SD): 24.93 ± 4.15 years and body mass index (BMI): 25.63 ± 3.59 kg/m2] and an equal number of healthy nonsmoking females matched for age and BMI were recruited for the study. TBC and BMR were measured for all participants using octopolar bioelectric impedance analysis and indirect calorimetry, respectively. A significant regression equation for estimating BMR, based on the covariates sex, weight, and TBC, was derived (R=0.96, SEE=48.59 kcal, and P<0.0001), which will be useful for nutritional and health status assessment of both individuals and populations. PMID:27127453
A test of alternative estimators for volume at time 1 from remeasured point samples
Francis A. Roesch; Edwin J. Green; Charles T. Scott
1993-01-01
Two estimators of volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...
Prediction equation for estimating total daily energy requirements of special operations personnel.
Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M
2018-01-01
Special Operations Forces (SOF) engage in a variety of military tasks, many of which produce high energy expenditures, leading to undesired energy deficits and loss of body mass. Therefore, the ability to accurately estimate daily energy requirements would be useful for accurate logistical planning. Our objective was to generate a predictive equation estimating the energy requirements of SOF. We performed a retrospective analysis of data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate. Physical activity level was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 (range: 3700 to 6300) kcal·d-1. Regression analysis revealed that physical activity level (r = 0.91; P plan appropriate feeding regimens to meet SOF nutritional requirements across their mission profile.
Determination of total alpha activity index in samples of radioactive wastes
International Nuclear Information System (INIS)
Galicia C, F. J.
2015-01-01
This study aimed to develop a methodology for the preparation and quantification of samples containing alpha- and/or beta-emitting radionuclides, in order to determine the total alpha and beta activity indices of radioactive waste samples. For this purpose, a planchette-preparation device was designed to assist in preparing planchettes in a controlled environment free of corrosive vapors. Planchettes of natural uranium (an alpha and beta emitter) were prepared in three media (nitrate, carbonate and sulfate) at different mass thicknesses; planchettes of Sr-90 (a pure beta emitter) were prepared in nitrate medium only. These planchettes were counted in an alpha/beta counter in order to construct the self-absorption curves for alpha and beta particles. These curves are necessary for determining the alpha-beta activity index of any sample, because they provide the self-absorption correction factor to be applied in calculating the index. Samples with U were prepared with the help of the planchette-preparation device and subsequently analyzed in the Mpc-100 proportional counter (Pic brand). Samples with Sr-90 were prepared without the device to see whether a different behavior was obtained with respect to mass thickness; they were likewise calcined and counted in the Mpc-100. Before counting, the counter operating parameters were determined: operating voltages of 630 and 1500 V for alpha and beta particles, respectively; a counting routine in which the time and count type were adjusted; and the counting efficiencies for alpha and beta particles, obtained with the aid of 210Po (alpha) and 90Sr (beta) calibration sources. According to the results, the counts per minute decrease as the mass thickness of the sample increases (the self-absorption curve), and this behavior fits an exponential function in all cases studied. The least self-absorption of alpha and beta particles for U was obtained in the sulfate medium. The self-absorption curves of Sr-90 follow the
Metals determination in coffee sample by total reflection X-ray fluorescence analysis (TXRF)
International Nuclear Information System (INIS)
Vives, Ana Elisa Sirito de
2005-01-01
The objective of this study was to evaluate the inorganic concentrations in five brands of coffee, three of them nationally marketed and the others export types. The samples were prepared by infusion with deionized water. To carry out the calibration, standard solutions were prepared with different concentrations of Al, Si, K, Ca, Ti, Cr, Fe, Ni, Zn and Se. The measurements were carried out using a white beam of synchrotron radiation for excitation and a Si(Li) semiconductor detector for detection. By employing Synchrotron Radiation Total Reflection X-Ray Fluorescence Analysis (SR-TXRF) it was possible to evaluate the concentrations of P, S, Cl, K, Ca, Mn, Fe, Cu, Zn, Rb and Ba. The detection limits for a 300 s counting time ranged from 0.03 (Ca) to 30 ng g-1 (Rb). (author)
Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian
2017-06-01
A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end that performs feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed by nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of the information relevant for accurate HR estimation.
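The least-squares spectral fitting step can be sketched in pure Python: scan candidate heart rates and keep the one whose sinusoid best correlates with the nonuniformly sampled signal, which is the idea behind the Lomb-Scargle periodogram (this is a simplified, unnormalized version applied to a synthetic PPG-like tone, not the ASIC's algorithm).

```python
# Sketch of HR estimation from nonuniformly subsampled data: scan candidate
# frequencies and pick the one whose sine/cosine least-squares fit captures
# the most signal power (simplified Lomb-Scargle-style spectral fitting).
import math
import random

random.seed(0)
f_true = 1.2                      # synthetic 72 bpm pulse
t = sorted(random.uniform(0.0, 4.0) for _ in range(40))   # nonuniform samples
y = [math.sin(2 * math.pi * f_true * ti) for ti in t]

def spectral_power(t, y, f):
    # unnormalized power of the best-fitting sinusoid at frequency f
    c = sum(yi * math.cos(2 * math.pi * f * ti) for ti, yi in zip(t, y))
    s = sum(yi * math.sin(2 * math.pi * f * ti) for ti, yi in zip(t, y))
    return c * c + s * s

candidates = [0.5 + 0.01 * k for k in range(251)]    # 30-180 bpm grid
f_est = max(candidates, key=lambda f: spectral_power(t, y, f))
print(round(f_est * 60))          # estimated heart rate in bpm
```

Because the fit works on arbitrary (nonuniform) sample times, no reconstruction of the full signal is needed, which is what makes the compressive readout power-efficient.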
Lumme, E.; Pomoell, J.; Kilpua, E. K. J.
2017-12-01
Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.
International Nuclear Information System (INIS)
Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.
2016-01-01
Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider that the number of macroeconomic variables used for prediction is a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and which variables to select. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments in a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
A Convenient Method for Estimation of the Isotopic Abundance in Uranium Bearing Samples
International Nuclear Information System (INIS)
Al-Saleh, F.S.; Al-Mukren, A.H.; Farouk, M.A.
2008-01-01
A convenient and simple method for estimating the isotopic abundance in some uranium-bearing samples using gamma-ray spectrometry is developed, using a hyperpure germanium spectrometer and a standard uranium sample with known isotopic abundance.
Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes
International Nuclear Information System (INIS)
Murarka, I.P.; Bodeau, D.J.
1977-01-01
Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the application of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.
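The efficiency gain from stratification can be demonstrated on synthetic data: when impingement counts differ strongly between strata (say, peak vs. off-peak periods), a proportionately allocated stratified sample estimates the mean with far lower variance than simple random sampling. The population below is synthetic, purely for illustration.

```python
# Sketch: compare the spread of the mean estimator under simple random
# sampling (SRS) vs. proportionate stratified sampling on a synthetic
# two-stratum "impingement" population.
import random
import statistics

random.seed(1)
low = [random.gauss(10, 2) for _ in range(5000)]     # off-peak stratum
high = [random.gauss(50, 2) for _ in range(5000)]    # peak stratum
pop = low + high

def srs_mean(n):
    return statistics.mean(random.sample(pop, n))

def stratified_mean(n):
    half = n // 2   # proportionate allocation: the strata are equal-sized
    return 0.5 * statistics.mean(random.sample(low, half)) \
         + 0.5 * statistics.mean(random.sample(high, half))

srs = [srs_mean(50) for _ in range(500)]
strat = [stratified_mean(50) for _ in range(500)]
print(round(statistics.stdev(srs), 2), round(statistics.stdev(strat), 2))
```

The stratified estimator's spread reflects only the within-stratum variance, which is exactly why the stratified schemes in the abstract come out more efficient.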
Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method
Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.
1982-01-01
A two-phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. This technique employs visual interpretation of LANDSAT images and panchromatic aerial photographs, the latter considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two-phase design was 157% when compared with a single-phase sample of aerial photographs.
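A minimal sketch of the two-phase (double sampling) ratio estimate: the cheap phase-1 measurement (LANDSAT interpretation) is observed on many segments, while the expensive ground truth (aerial photographs) is observed only on a phase-2 subsample. All values below are hypothetical.

```python
# Sketch of a two-phase ratio estimate: adjust the cheap phase-1 mean by
# the ratio of ground truth to cheap measurement on the phase-2 subsample.
def two_phase_ratio(x_phase1_mean, x_phase2, y_phase2):
    ratio = sum(y_phase2) / sum(x_phase2)   # photo area per image-area unit
    return x_phase1_mean * ratio

x1_mean = 120.0                      # mean LANDSAT-interpreted area, ha
x2 = [100.0, 140.0, 110.0, 150.0]    # LANDSAT area on the subsample
y2 = [95.0, 150.0, 105.0, 160.0]     # aerial-photo "truth" on same segments
print(round(two_phase_ratio(x1_mean, x2, y2), 1))
```

The design is efficient when, as in the abstract, the cheap and expensive measurements are strongly correlated, so a small photo subsample suffices to calibrate a large image sample.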
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
Full Text Available In this study, a comparison has been made of different sampling designs using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap were used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both jackknife and bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. In jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling: with jackknife and bootstrap it gives minimum variance even at the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
Gorur, F Korkmaz; Camgoz, H
2014-10-01
The level of natural radioactivity in Bolu province of north-western Turkey was assessed in this study. No radioactivity measurements in water samples have been reported for Bolu province so far. For this reason, gross α and β activities of 55 different water samples collected from tap, spring, mineral, river and lake waters in Bolu were determined. The mean activity concentrations in tap water were 68.11 mBq L(-1) for gross α and 169.44 mBq L(-1) for gross β. For all samples the gross β activity was always higher than the gross α activity. All gross α values were lower than the limit value of 500 mBq L(-1), while two spring and one mineral water samples were found to have gross β activity concentrations greater than 1000 mBq L(-1). The associated age-dependent dose from ingestion of all water in Bolu was estimated. The average total dose for adults exceeds the WHO recommended limit value. The risk levels from direct ingestion of the natural radionuclides in tap and mineral water in Bolu were determined. The mean (210)Po and (228)Ra risk values of tap and mineral waters slightly exceed what some consider an acceptable risk of 10(-4) or less. Copyright © 2014 Elsevier Ltd. All rights reserved.
Estimated ventricle size using Evans index: reference values from a population-based sample.
Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C
2017-03-01
Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.
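Evans index itself is a simple ratio: conventionally, the maximal width of the frontal horns of the lateral ventricles divided by the maximal internal diameter of the skull on the same axial slice. A minimal sketch (the millimetre measurements are illustrative, not study data):

```python
def evans_index(frontal_horn_width_mm, max_internal_skull_diameter_mm):
    """Ratio of maximal frontal horn width to maximal internal skull
    diameter, both measured on the same axial CT slice."""
    return frontal_horn_width_mm / max_internal_skull_diameter_mm

# Values > 0.3 are considered pathological by current iNPH guidelines.
assert round(evans_index(33.0, 110.0), 2) == 0.3
assert evans_index(40.0, 110.0) > 0.3
```

The study's point is that one in five people aged 70+ crosses this 0.3 cutoff, so the threshold alone is a weak diagnostic criterion in older populations.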
Elze, J; Liebler-Tenorio, E; Ziller, M; Köhler, H
2013-07-01
The objective of this study was to identify the most reliable approach for prevalence estimation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in clinically healthy slaughtered cattle. Sampling of macroscopically suspect tissue was compared to systematic sampling. Specimens of ileum, jejunum, mesenteric and caecal lymph nodes were examined for MAP infection using bacterial microscopy, culture, histopathology and immunohistochemistry. MAP was found most frequently in caecal lymph nodes, but sampling more tissues optimized the detection rate. Examination by culture was most efficient while combination with histopathology increased the detection rate slightly. MAP was detected in 49/50 animals with macroscopic lesions representing 1.35% of the slaughtered cattle examined. Of 150 systematically sampled macroscopically non-suspect cows, 28.7% were infected with MAP. This indicates that the majority of MAP-positive cattle are slaughtered without evidence of macroscopic lesions and before clinical signs occur. For reliable prevalence estimation of MAP infection in slaughtered cattle, systematic random sampling is essential.
Moore, Richard Bridge; Johnston, Craig M.; Robinson, Keith W.; Deacon, Jeffrey R.
2004-01-01
phosphorus model include discharges for municipal wastewater-treatment facilities and pulp and paper facilities, developed land area, agricultural area, and forested area. For total phosphorus, loss rates were significant for reservoirs with surface areas of 10 square kilometers or less, and in streams with flows less than or equal to 2.83 cubic meters per second. Applications of SPARROW for evaluating nutrient loading in New England waters include estimates of the spatial distributions of total nitrogen and phosphorus yields, sources of the nutrients, and the potential for delivery of those yields to receiving waters. This information can be used to (1) predict ranges in nutrient levels in surface waters, (2) identify the environmental variables that are statistically significant predictors of nutrient levels in streams, (3) evaluate monitoring efforts for better determination of nutrient loads, and (4) evaluate management options for reducing nutrient loads to achieve water-quality goals.
DEFF Research Database (Denmark)
Witgen, Brent Marvin; Grady, M. Sean; Nyengaard, Jens Randel
2006-01-01
The quantification of ultrastructure has been permanently improved by the application of new stereological principles. Both precision and efficiency have been enhanced. Here we report for the first time a fractionator method that can be applied at the electron microscopy level. This new design … the total object number using section sampling fractions based on the average thickness of sections of variable thicknesses. As an alternative, this approach estimates the correct particle section sampling probability based on an estimator of the Horvitz-Thompson type, resulting in a theoretically more …
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils
Brus, D.J.; Gruijter, de J.J.; Vries, de W.
2010-01-01
A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by
MRI estimation of total renal volume demonstrates significant association with healthy donor weight
International Nuclear Information System (INIS)
Cohen, Emil I.; Kelly, Sarah A.; Edye, Michael; Mitty, Harold A.; Bromberg, Jonathan S.
2009-01-01
Purpose: The purpose of this study was to correlate total renal volume (TRV) calculations, obtained through the voxel-count method and the ellipsoid formula, with various physical characteristics. Materials and methods: MRI reports and physical examinations from 210 healthy kidney donors (420 kidneys), in whom renal volumes were obtained using the voxel-count method, were retrospectively reviewed. These values, along with ones obtained through a more traditional method (the ellipsoid formula), were correlated with subject height, body weight, body mass index (BMI), and age. Results: TRV correlated strongly with body weight (r = 0.7) and to a lesser degree with height, age, and BMI (r = 0.5, -0.2, 0.3, respectively). The left kidney volume was greater than the right, on average (p < 0.001). The ellipsoid formula method overestimated renal volume by 17% on average, which was significant (p < 0.001). Conclusions: Body weight was the physical characteristic that demonstrated the strongest correlation with renal volume in healthy subjects. Given this finding, a formula was derived for estimating the TRV of a given patient based on his or her weight: TRV = 2.96 x weight (kg) + 113 ± 64.
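The reported regression can be applied directly. A minimal sketch of the study's weight-based formula, assuming the volume unit (not restated in the abstract) is millilitres, with the ±64 term read as the spread around the point estimate:

```python
def estimated_trv(weight_kg):
    """Point estimate of total renal volume from the study's regression:
    TRV = 2.96 * weight(kg) + 113, with a reported spread of +/- 64."""
    return 2.96 * weight_kg + 113

# A 70 kg donor gives a point estimate of 2.96 * 70 + 113 = 320.2.
trv = estimated_trv(70)
```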
Froehle, Andrew W; Schoeninger, Margaret J
2006-12-01
We conducted a meta-analysis of 45 studies reporting basal metabolic rate (BMR) data for Homo sapiens and Pan troglodytes to determine the effects of sex, age, and latitude (a proxy for climate, in humans only). BMR was normalized for body size using fat-free mass in humans and body mass in chimpanzees. We found no effect of sex in either species and no age effect in chimpanzees. In humans, juveniles differed significantly from adults (ANCOVA). We derived equations relating BMR and body size, and used them to predict total daily energy expenditure (TEE) in four early hominin species. Our predictions concur with previous TEE estimates (i.e. Leonard and Robertson: Am J Phys Anthropol 102 (1997) 265-281), and support the conclusion that TEE increased greatly with H. erectus. Our results show that intraspecific variation in BMR does not affect TEE estimates for interspecific comparisons. Comparisons of more closely related groups such as humans and Neandertals, however, may benefit from consideration of this variation. 2006 Wiley-Liss, Inc.
A "total parameter estimation" method in the verification of distributed hydrological models
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, hence model parameters are commonly estimated from discharge measurements at the catchment outlets. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, it can provide a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is highly concentrated in the rainy season from June to August each year; during other months, many of the perennial rivers within the basin dry up. Thus runoff simulation alone does not fully utilize the distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in
Akutsu, K; Takatori, S; Nakazawa, H; Hayakawa, K; Izumi, S; Makino, T
2008-01-01
This study presents the results of a total diet study performed for estimating the dietary intake of polybrominated diphenyl ethers (PBDEs) in Osaka, Japan. The concentrations of 36 PBDEs were measured in samples from 14 food groups (Groups I-XIV). PBDEs were detected only in Groups IV (oils and fats), V (legumes and their products), X (fish, shellfish, and their products), and XI (meat and eggs) at concentrations of 1.8, 0.03, 0.48, and 0.01 ng g⁻¹, respectively. For an average person, the lower bound dietary intakes of penta- and deca-formulations were estimated to be 46 and 21 ng day⁻¹, respectively. A high proportion of the decabrominated congener (DeBDE-209) was observed in Group IV. To confirm the presence of DeBDE-209 in vegetable oils, an additional analysis was performed using 18 vegetable oil samples. Of these, seven contained ng g⁻¹ levels of DeBDE-209.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...
Critical length sampling: a method to estimate the volume of downed coarse woody debris
Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2010-01-01
In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...
Per tree estimates with n-tree distance sampling: an application to increment core data
Thomas B. Lynch; Robert F. Wittwer
2002-01-01
Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...
Energy Technology Data Exchange (ETDEWEB)
Mazonakis, Michalis; Berris, Theoharris; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete (Greece); Lyraraki, Efrossyni [Department of Radiotherapy and Oncology, University Hospital of Iraklion, 71110 Iraklion, Crete (Greece)
2013-10-15
Purpose: Heterotopic ossification (HO) is a frequent complication following total hip arthroplasty. This study was conducted to calculate the radiation dose to organs-at-risk and estimate the probability of cancer induction from radiotherapy for HO prophylaxis. Methods: Hip irradiation for HO with a 6 MV photon beam was simulated with the aid of a Monte Carlo model. A realistic humanoid phantom representing an average adult patient was implemented in the Monte Carlo environment for dosimetric calculations. The average out-of-field radiation dose to stomach, liver, lung, prostate, bladder, thyroid, breast, uterus, and ovary was calculated. The organ equivalent dose to the colon, which was partly included within the treatment field, was also determined. Organ dose calculations were carried out using three different field sizes. The dependence of organ doses upon the block insertion into the primary beam for shielding the colon and prosthesis was investigated. The lifetime attributable risk for cancer development was estimated using organ-, age-, and gender-specific risk coefficients. Results: For a typical target dose of 7 Gy, organ doses varied from 1.0 to 741.1 mGy with the field dimensions and organ location relative to the field edge. Blocked field irradiations resulted in a dose range of 1.4–146.3 mGy. The most probable detriment from open field treatment of male patients was colon cancer, with a high risk of 564.3 × 10⁻⁵ to 837.4 × 10⁻⁵ depending upon the organ dose magnitude and the patient's age. The corresponding colon cancer risk for female patients was (372.2–541.0) × 10⁻⁵. The probability of bladder cancer development was more than 113.7 × 10⁻⁵ and 110.3 × 10⁻⁵ for males and females, respectively. The cancer risk range to other individual organs was reduced to (0.003–68.5) × 10⁻⁵. Conclusions: The risk for cancer induction from radiation therapy for HO prophylaxis after total hip arthroplasty varies considerably by
International Nuclear Information System (INIS)
Mazonakis, Michalis; Berris, Theoharris; Damilakis, John; Lyraraki, Efrossyni
2013-01-01
Purpose: Heterotopic ossification (HO) is a frequent complication following total hip arthroplasty. This study was conducted to calculate the radiation dose to organs-at-risk and estimate the probability of cancer induction from radiotherapy for HO prophylaxis. Methods: Hip irradiation for HO with a 6 MV photon beam was simulated with the aid of a Monte Carlo model. A realistic humanoid phantom representing an average adult patient was implemented in the Monte Carlo environment for dosimetric calculations. The average out-of-field radiation dose to stomach, liver, lung, prostate, bladder, thyroid, breast, uterus, and ovary was calculated. The organ equivalent dose to the colon, which was partly included within the treatment field, was also determined. Organ dose calculations were carried out using three different field sizes. The dependence of organ doses upon the block insertion into the primary beam for shielding the colon and prosthesis was investigated. The lifetime attributable risk for cancer development was estimated using organ-, age-, and gender-specific risk coefficients. Results: For a typical target dose of 7 Gy, organ doses varied from 1.0 to 741.1 mGy with the field dimensions and organ location relative to the field edge. Blocked field irradiations resulted in a dose range of 1.4–146.3 mGy. The most probable detriment from open field treatment of male patients was colon cancer, with a high risk of 564.3 × 10⁻⁵ to 837.4 × 10⁻⁵ depending upon the organ dose magnitude and the patient's age. The corresponding colon cancer risk for female patients was (372.2–541.0) × 10⁻⁵. The probability of bladder cancer development was more than 113.7 × 10⁻⁵ and 110.3 × 10⁻⁵ for males and females, respectively. The cancer risk range to other individual organs was reduced to (0.003–68.5) × 10⁻⁵. Conclusions: The risk for cancer induction from radiation therapy for HO prophylaxis after total hip arthroplasty varies considerably by the treatment parameters, organ
Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response
Directory of Open Access Journals (Sweden)
Muqaddas Javed
2014-09-01
Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first and second phases. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife variance estimators of these variances are also derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
…, the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noise in possibly realistic ranges. In all cases examined, the proportionator…
Sample preparation for total reflection X-ray fluorescence analysis using resist pattern technique
Tsuji, K.; Yomogita, N.; Konyuba, Y.
2018-06-01
A circular resist pattern layer with a diameter of 9 mm was prepared on a glass substrate (26 mm × 76 mm; 1.5 mm thick) for total reflection X-ray fluorescence (TXRF) analysis. The parallel cross pattern was designed with a wall thickness of 10 μm, an interval of 20 μm, and a height of 1.4 or 0.8 μm. This additional resist layer did not significantly increase the background intensity at the XRF peaks in TXRF spectra. Dotted residue was obtained from a standard solution (10 μL) containing Ti, Cr, Ni, Pb, and Ga, each at a final concentration of 10 ppm, on a normal glass substrate with a silicone coating layer. The height of this residue was more than 100 μm, and self-absorption in the large residue affected TXRF quantification (intensity relative standard deviation (RSD): 12-20%). In contrast, when a small volume of solution was dropped and cast on the resist pattern structure, the residue obtained was not a complete film but a film-like deposit with a thickness of less than 1 μm, for which self-absorption was not a serious problem. This sample preparation was thus demonstrated to improve TXRF quantification (intensity RSD: 2-4%).
Estimation of radon concentration in soil and groundwater samples of Northern Rajasthan, India
Directory of Open Access Journals (Sweden)
Sudhir Mittal
2016-04-01
Full Text Available In the present investigation, analysis of radon concentration in 20 water and soil samples collected from different locations in the Bikaner and Jhunjhunu districts of Rajasthan, India has been carried out using RAD7, an electronic radon detector. The measured radon concentration in water samples lies in the range 0.50 to 22 Bq l⁻¹ with a mean value of 4.42 Bq l⁻¹, which lies within the safe limit of 4 to 40 Bq l⁻¹ recommended by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR, 2008). The total annual effective dose estimated due to radon concentration in water ranges from 1.37 to 60.06 μSv y⁻¹ with a mean value of 12.08 μSv y⁻¹, which is lower than the safe limit of 0.1 mSv y⁻¹ set by the World Health Organization (WHO, 2004) and the European Council (EU, 1998). Radon measurements in soil samples vary from 941 to 10,050 Bq m⁻³ with a mean value of 4561 Bq m⁻³, which lies within the range reported by other investigators. It was observed that the soil and water of the Bikaner and Jhunjhunu districts are suitable for drinking and construction purposes without posing any health hazard.
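The annual effective dose from ingested radon is conventionally the product of concentration, annual water intake, and a dose conversion factor. A minimal sketch; the intake and conversion factor below are illustrative placeholders, not the age-dependent values used in the study:

```python
def annual_effective_dose_usv(conc_bq_per_l, intake_l_per_y, dcf_sv_per_bq):
    """AED (microSv/y) = C (Bq/l) * annual intake (l/y) * DCF (Sv/Bq) * 1e6."""
    return conc_bq_per_l * intake_l_per_y * dcf_sv_per_bq * 1e6

# Illustrative: the study's mean 4.42 Bq/l, with an assumed 60 l/y direct
# tap-water intake and an assumed DCF of 1e-8 Sv/Bq (placeholders only).
dose = annual_effective_dose_usv(4.42, 60.0, 1e-8)
```

The resulting figure stays well below the 0.1 mSv/y (100 μSv/y) reference level cited in the abstract, consistent with the study's conclusion.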
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
Full Text Available During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. A gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimation for a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in test analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.
Dual-joint modeling for estimation of total knee replacement contact forces during locomotion.
Hast, Michael W; Piazza, Stephen J
2013-02-01
Model-based estimation of in vivo contact forces arising between components of a total knee replacement is challenging because such forces depend upon accurate modeling of muscles, tendons, ligaments, contact, and multibody dynamics. Here we describe an approach to solving this problem with results that are tested by comparison to knee loads measured in vivo for a single subject and made available through the Grand Challenge Competition to Predict in vivo Tibiofemoral Loads. The approach makes use of a "dual-joint" paradigm in which the knee joint is alternately represented by (1) a ball-joint knee for inverse dynamic computation of required muscle controls and (2) a 12 degree-of-freedom (DOF) knee with elastic foundation contact at the tibiofemoral and patellofemoral articulations for forward dynamic integration. Measured external forces and kinematics were applied as a feedback controller and static optimization attempted to track measured knee flexion angles and electromyographic (EMG) activity. The resulting simulations showed excellent tracking of knee flexion (average RMS error of 2.53 deg) and EMG (muscle activations within ±10% envelopes of normalized measured EMG signals). Simulated tibiofemoral contact forces agreed qualitatively with measured contact forces, but their RMS errors were approximately 25% of the peak measured values. These results demonstrate the potential of a dual-joint modeling approach to predict joint contact forces from kinesiological data measured in the motion laboratory. It is anticipated that errors in the estimation of contact force will be reduced as more accurate subject-specific models of muscles and other soft tissues are developed.
Comparative methane estimation from cattle based on total CO2 production using different techniques
Directory of Open Access Journals (Sweden)
Md N. Haque
2017-06-01
Full Text Available The objective of this study was to compare the precision of CH4 estimates using calculated CO2 (HP) by the CO2 method (CO2T) and measured CO2 in the respiration chamber (CO2R). The CO2R and CO2T study was conducted as a 3 × 3 Latin square design in which 3 Dexter heifers were allocated to metabolic cages for 3 periods. Each period consisted of 2 weeks of adaptation followed by 1 week of measurement with the CO2R and CO2T. The average body weight of the heifers was 226 ± 11 kg (mean ± SD). They were fed a total mixed ration, twice daily, with 1 of 3 supplements: wheat (W), molasses (M), or molasses mixed with sodium bicarbonate (Mbic). The dry matter intake (DMI; kg/day) was significantly greater (P < 0.001) in the metabolic cage than in the respiration chamber. The daily CH4 emission (L/day) was strongly correlated (r = 0.78) between CO2T and CO2R. The daily CH4 emission (L/kg DMI) by the CO2T was of the same magnitude as by the CO2R. The measured CO2 production (L/day) in the respiration chamber was not different (P = 0.39) from the CO2 production calculated using the CO2T. These results indicate reasonable accuracy and precision of CH4 estimation by the CO2T compared with the CO2R.
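In the CO2 method, daily CH4 is obtained by scaling a calculated daily CO2 production (derived from heat production) by the CH4:CO2 concentration ratio measured in the animal's breath. A minimal sketch; the numbers are illustrative, not the study's measurements:

```python
def methane_l_per_day(co2_production_l_per_day, ch4_to_co2_ratio):
    """CO2 method: CH4 = calculated daily CO2 production scaled by the
    CH4:CO2 concentration ratio measured in breath samples."""
    return co2_production_l_per_day * ch4_to_co2_ratio

# Illustrative values only: 2500 L CO2/day and a breath CH4:CO2 ratio of 0.08.
ch4 = methane_l_per_day(2500.0, 0.08)
```

The comparison in the abstract is essentially between this calculated CO2 input and the CO2 actually measured in a respiration chamber, with the ratio term common to both.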
Total cost estimates for large-scale wind scenarios in UK
International Nuclear Information System (INIS)
Dale, Lewis; Milborrow, David; Slark, Richard; Strbac, Goran
2004-01-01
The recent UK Energy White Paper suggested that the Government should aim to secure 20% of electricity from renewable sources by 2020. A number of estimates of the extra cost of such a commitment have been made, but these have not necessarily included all the relevant cost components. This analysis sets out to identify these and to calculate the extra cost to the electricity consumer, assuming all the renewable electricity is sourced from wind energy. This enables one of the more controversial issues--the implications of wind intermittency--to be addressed. The basis of the assumptions associated with generating costs, extra balancing costs and distribution and transmission system reinforcement costs are all clearly identified and the total costs of a '20% wind' scenario are compared with a scenario where a similar amount of energy is generated by gas-fired plant. This enables the extra costs of the renewables scenario to be determined. The central estimate of the extra costs to electricity consumers is just over 0.3 p/kW h in current prices (around 5% extra on average domestic unit prices). Sensitivity analyses examine the implications of differing assumptions. The extra cost would rise if the capital costs of wind generation fall slower than anticipated, but would fall if gas prices rise more rapidly than has been assumed, or if wind plant are more productive. Even if it is assumed that wind has no capacity displacement value, the added cost to the electricity consumer rises by less than 0.1 p/kW h. It is concluded that there does not appear to be any technical reason why a substantial proportion of the country's electricity requirements could not be delivered by wind
Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.
2013-01-01
percent. However, because particulate nitrogen constituted only 14 percent, on average, of TN-C, the precision of the TN-C method approached that of the method for dissolved nitrogen (2.3 percent). On the other hand, total Kjeldahl nitrogen (having a variability of 7.6 percent) constituted an average of 40 percent of TN-K, suggesting that the reduced precision of the Kjeldahl digestion may affect precision of the TN-K estimates. For most samples, the precision of TN computed as TN-C would be better (lower variability) than the precision of TN-K. In general, TN-A precision (having a variability of 2.1 percent) was superior to TN-C and TN-K methods. The laboratory experiment indicated that negative bias in TN-A was present across the entire range of sediment concentration and increased as sediment concentration increased. This suggested that reagent limitation was not the predominant cause of observed bias in TN-A. Furthermore, analyses of particulate nitrogen present in digest residues provided an almost complete accounting for the nitrogen that was underestimated by alkaline-persulfate digestion. This experiment established that, for the reference materials at least, negative bias in TN-A was caused primarily by the sequestration of some particulate nitrogen that was refractory to the digestion process. TN-K biases varied between positive and negative values in the laboratory experiment. Positive bias in TN-K is likely the result of the unintended reduction of a small and variable amount of nitrate to ammonia during the Kjeldahl digestion process. Negative TN-K bias may be the result of the sequestration of a portion of particulate nitrogen during the digestion process. Negative bias in TN-A was present across the entire range of suspended-sediment concentration (1 to 14,700 milligrams per liter [mg/L]) in the synoptic-field study, with relative bias being nearly as great at sediment concentrations below 10 mg/L (median of -3.5 percent) as that observed at sediment
DEFF Research Database (Denmark)
Nielsen, Morten Ø.; Frederiksen, Per Houmann
2005-01-01
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all …, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
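Bias and root mean squared error, the two finite-sample criteria named above, can be computed from a set of Monte Carlo estimates of d in a few lines (the replicate values below are illustrative, not the paper's simulation output):

```python
def bias_and_rmse(estimates, true_value):
    """Monte Carlo bias and RMSE of an estimator against the true parameter."""
    n = len(estimates)
    bias = sum(estimates) / n - true_value
    rmse = (sum((e - true_value) ** 2 for e in estimates) / n) ** 0.5
    return bias, rmse

# Illustrative replicates of d-hat around a true d = 0.4:
bias, rmse = bias_and_rmse([0.38, 0.41, 0.43, 0.37, 0.42], 0.4)
```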
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
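The co-prime restriction on the interpulse interval described above can be checked directly; a minimal sketch (the function name and example values are illustrative, not from the paper):

```python
import math

def interval_is_valid(m: int, p: int) -> bool:
    """The interpulse interval m (in sampling periods of the desired rate)
    must be co-prime with the number of pulses P, so that the P low-rate
    observation sets cover P distinct sampling phases."""
    return math.gcd(m, p) == 1

# With P = 4 pulses, an interval of 7 periods works; 6 shares a factor with 4.
ok = interval_is_valid(7, 4)
bad = interval_is_valid(6, 4)
```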
Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan
2016-08-15
This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN-static neural network; NARX network-dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest. Copyright © 2016 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Falkenberg, G.; Pepponi, G.; Streli, C.; Wobrauschek, P.
2003-01-01
X-ray absorption fine structure (XAFS) experiments in fluorescence mode have been performed in total reflection excitation geometry and conventional 45 deg. /45 deg. excitation/detection geometry for comparison. The experimental results have shown that XAFS measurements are feasible under normal total reflection X-ray fluorescence (TXRF) conditions, i.e. on droplet samples, with excitation in grazing incidence and using a TXRF experimental chamber. The application of the total reflection excitation geometry for XAFS measurements increases the sensitivity compared to the conventional geometry leading to lower accessible concentration ranges. However, XAFS under total reflection excitation condition fails for highly concentrated samples because of the self-absorption effect
40 CFR 1065.595 - PM sample post-conditioning and total weighing.
2010-07-01
... minutes before weighing. Note that 400 µg on sample media (e.g., filters) is an approximate net mass of 0... the procedures in § 1065.590(f) through (i) to determine post-test mass of the sample media (e.g., filters). (g) Subtract each buoyancy-corrected tare mass of the sample medium (e.g., filter) from its...
Oceanic uptake of CO2 re-estimated through δ13C in WOCE samples
International Nuclear Information System (INIS)
Lerperger, Michael; McNichol, A.P.; Peden, J.; Gagnon, A.R.; Elder, K.L.; Kutschera, W.; Rom, W.; Steier, P.
2000-01-01
In addition to 14C, a large set of δ13C data was produced at NOSAMS as part of the World Ocean Circulation Experiment (WOCE). In this paper, a subset of 973 δ13C results from 63 stations in the Pacific Ocean was compared to a total number of 219 corresponding results from 12 stations sampled during oceanographic programs in the early 1970s. The data were analyzed in light of recent work to estimate the uptake of CO2 derived from fossil fuel and biomass burning in the oceans by quantifying the δ13C Suess effect in the oceans. In principle, the δ13C value of dissolved inorganic carbon (DIC) allows a quantitative estimate of how much of the anthropogenic CO2 released into the atmosphere is taken up by the oceans, because the δ13C of CO2 derived from organic matter (∼−2.7 percent) is significantly different from that of the atmosphere (∼−0.8 percent). Our new analysis indicates an apparent discrepancy between the old and the new data sets, possibly caused by a constant offset in δ13C values in a subset of the data. A similar offset was reported in an earlier work by Paul Quay et al. for one station that was not included in their final analysis. We present an estimate for this assumed offset based on data from water depths below which little or no change in δ13C over time would be expected. Such a correction leads to a significantly reduced estimate of the CO2 uptake, possibly as low as one half of the amount of 2.1 GtC yr−1 (gigatons carbon per year) estimated previously. The present conclusion is based on a comparison with a relatively small data set from the 1970s in the Pacific Ocean. The larger data set collected during the GEOSECS program was not used because of problems reported with the data. This work suggests there may also be problems in comparing non-GEOSECS data from the 1970s to the current data. The calculation of significantly lower uptake estimates based on an offset-related problem appears valid, but the exact figures remain uncertain.
International Nuclear Information System (INIS)
Santana, L V; Sarkis, J E S; Ulrich, J C; Hortellani, M A
2015-01-01
We provide uncertainty estimates for homogeneity and stability studies of a reference material used in a proficiency test for determination of total mercury in fresh fish muscle tissue. Stability was estimated by linear regression and homogeneity by ANOVA. The results indicate that the reference material is both homogeneous and chemically stable over the short term. Total mercury concentration of the muscle tissue, with expanded uncertainty, was 0.294 ± 0.089 μg g−1
Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In
2014-01-01
We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
Pandey, Pallavi; Reddy, N Venugopal; Rao, V Arun Prasad; Saxena, Aditya; Chaudhary, C P
2015-03-01
The aim of the study was to evaluate salivary flow rate, pH, buffering capacity, calcium, total protein content and total antioxidant capacity in relation to dental caries, age and gender. The study population consisted of 120 healthy children aged 7-15 years, further divided into two age groups: 7-10 years and 11-15 years. It included 60 children with DMFS/dfs = 0 and 60 children with DMFS/dfs ≥5. The subjects were divided into two groups; Group A: children with DMFS/dfs = 0 (caries-free) and Group B: children with DMFS/dfs ≥5 (caries active). Unstimulated saliva samples were collected from all groups. Flow rates were determined, and samples were analyzed for pH, buffer capacity, calcium, total protein and total antioxidant status. Salivary antioxidant activity was measured spectrophotometrically by an adaptation of the 2,2'-azino-di-(3-ethylbenzthiazoline-6-sulphonate) assay. In the age group 7-10 years, the mean difference between the two groups (caries-free and caries active) was statistically significant (P < 0.05) for salivary calcium, total protein and total antioxidant level for both sexes; in the age group 11-15 years, the mean difference between the two groups was statistically significant (P < 0.05) for salivary calcium level for both sexes, while salivary total protein and total antioxidant level were statistically significant for male children only. In general, total protein and total antioxidants in saliva increased with caries activity. Calcium content of saliva was found to be higher in the caries-free group and increased with age.
Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A
2016-04-01
Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase such as the dim light melatonin onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in the study; they were required to sleep on a fixed baseline schedule for a week, after which they visited the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling-rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.
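The absolute-threshold DLMO computation amounts to finding the first upward crossing of the melatonin profile; a minimal sketch with hypothetical values (linear interpolation between samples is one common convention, not necessarily the authors' exact procedure):

```python
def dlmo_time(times, melatonin, threshold=4.0):
    """Estimate the dim light melatonin onset as the linearly interpolated
    time at which melatonin first rises above an absolute threshold (pg/mL)."""
    pairs = zip(zip(times, melatonin), zip(times[1:], melatonin[1:]))
    for (t0, m0), (t1, m1) in pairs:
        if m0 < threshold <= m1:
            return t0 + (threshold - m0) * (t1 - t0) / (m1 - m0)
    return None  # threshold never crossed within the sampling window

# Hourly samples; times are hours relative to bedtime (hypothetical profile)
times = [-5, -4, -3, -2, -1, 0, 1]
melatonin = [0.5, 0.8, 1.5, 3.0, 6.0, 12.0, 18.0]
onset = dlmo_time(times, melatonin, threshold=4.0)
```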
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
Giovannelli, Justin; Curran, Emily
2017-02-01
Issue: Policymakers have sought to improve the shopping experience on the Affordable Care Act's marketplaces by offering decision support tools that help consumers better understand and compare their health plan options. Cost estimators are one such tool. They are designed to provide consumers a personalized estimate of the total cost (premium, minus subsidy, plus cost-sharing) of their coverage options. Cost estimators were available in most states by the start of the fourth open enrollment period. Goal: To understand the experiences of marketplaces that offer a total cost estimator and the interests and concerns of policymakers from states that are not using them. Methods: Structured interviews with marketplace officials, consumer enrollment assisters, technology vendors, and subject matter experts; analysis of the total cost estimators available on the marketplaces as of October 2016. Key findings and conclusions: Informants strongly supported marketplace adoption of a total cost estimator. Marketplaces that offer an estimator faced a range of design choices and varied significantly in their approaches to resolving them. Interviews suggested a clear need for additional consumer testing and data analysis of tool usage and for sustained outreach to enrollment assisters to encourage greater use of the estimators.
International Nuclear Information System (INIS)
Horvat, M.; Liang, L.; Mandic, V.
1995-01-01
The programme of this CRP is focused on the analysis of human hair samples. There are only two human hair samples certified for total mercury, and no RM for methylmercury compounds is available. One of the main objectives of this CRP is to produce, through the IAEA AQCS Programme, a human hair intercomparison material for quality assurance requirements in population monitoring programmes for total and methylmercury exposure. During the reporting period, MESL introduced a new method for simultaneous determination of total and methylmercury in biological samples. As the laboratory has close collaboration with the CRP's Reference Laboratory in Ljubljana, Slovenia, it has also been actively involved in the quality assurance component of this CRP. This report presents a summary of the results for total and methylmercury in two intercomparison samples, IAEA-085 and IAEA-086, using the newly developed method
Interval estimation methods of the mean in small sample situation and the results' comparison
International Nuclear Information System (INIS)
Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen
2009-01-01
The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculation of the sample mean intervals is carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
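For the Bootstrap method mentioned above, a percentile interval for a small-sample mean can be sketched as follows (the data values are hypothetical, for illustration only):

```python
import random
import statistics

def bootstrap_ci_mean(sample, n_resamples=5000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for the mean of a small sample:
    resample with replacement, compute the mean of each resample, and take the
    empirical alpha/2 and 1-alpha/2 quantiles of those means."""
    rng = random.Random(seed)
    n = len(sample)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=n)) for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [4.8, 5.1, 5.6, 4.9, 5.3]  # hypothetical n = 5 measurements
lo, hi = bootstrap_ci_mean(sample)
```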
Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers
National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...
National Research Council Canada - National Science Library
Erwin, R. S; Bernstein, Dennis S
2005-01-01
… In this paper we use a sampled-data extended Kalman filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft …
Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo
2008-02-01
Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using the limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r(2) = 0.96), supporting the DBS-based limited sampling AUC(0-12) strategy for drug monitoring.
International Nuclear Information System (INIS)
Sharma, P.; Sharma, G.N.; Shrivastava, B.; Jadhav, H.R.
2014-01-01
The aim of the present work was to investigate the antioxidant potential of different extracts of Blainvillea acmella leaf and stem. Successive extraction of each plant part was carried out using solvents of different polarity, viz. n-hexane, ethyl acetate, methanol and water. Preliminary phytochemical screening of all the extracts was done. Total phenolic contents were estimated by the Folin-Ciocalteu reagent method and expressed as μg/mg of gallic acid equivalent. The antioxidant potential and reducing power of all the prepared extracts were measured against DPPH, as compared to standard ascorbic acid and BHA, respectively. The results indicate that phenolic contents were highest in the methanolic extract of leaf (73.67 ± 0.38 mg/g), followed by ethyl acetate (29.08 ± 0.38 mg/g), aqueous (21.50 ± 0.28 mg/g), and n-hexane (9.29 ± 0.38 mg/g), as gallic acid equivalents. A similar pattern was observed for the stem: methanolic extract (41.90 ± 0.45 mg/g), ethyl acetate (21.92 ± 0.28 mg/g), aqueous (15.13 ± 0.18 mg/g), and n-hexane (3.69 ± 0.28 mg/g). The antioxidant capacity of the methanolic extracts of both parts (leaf and stem) was found to be maximum, with IC50 values of 226.49 ± 0.16 and 402.05 ± 1.10, respectively. The reducing power was also highest in the methanol extracts of both parts. The results suggest that the higher antioxidant and reducing power may be due to the phenolic contents present. (author)
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
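One classical randomized response design is Warner's; the abstract does not specify which design is used, so the following moment estimator is purely illustrative (counts and the design probability are hypothetical):

```python
def warner_estimate(yes_count, n, p=0.7):
    """Warner (1965) randomized-response estimator of a sensitive proportion.
    Each respondent answers the sensitive question with probability p and its
    complement with probability 1-p, so the observed "yes" proportion is
    lambda = p*pi + (1-p)*(1-pi); invert for pi."""
    lam = yes_count / n
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    var_hat = lam * (1 - lam) / (n * (2 * p - 1) ** 2)  # estimated variance
    return pi_hat, var_hat

# Hypothetical survey: 380 "yes" answers out of 1000 respondents, p = 0.7
pi_hat, var_hat = warner_estimate(380, 1000, p=0.7)
```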
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel and how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
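The blending step described, averaging the seven channel estimates weighted by inverse variance, is equivalent to multiplying Gaussian pdfs; a minimal sketch with hypothetical channel values (the numbers are not from the document):

```python
def fuse_gaussian_estimates(means, variances):
    """Combine independent Gaussian estimates by inverse-variance weighting.
    Returns the fused mean and fused variance (product of Gaussian pdfs)."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    fused_mean = sum(w * m for w, m in zip(weights, means)) / wsum
    return fused_mean, 1.0 / wsum

# Hypothetical per-channel mass estimates (grams) and calibration variances
channel_means = [10.2, 9.8, 10.5, 10.0, 9.9, 10.1, 10.4]
channel_vars = [0.30, 0.25, 0.60, 0.20, 0.22, 0.28, 0.55]
mass, var = fuse_gaussian_estimates(channel_means, channel_vars)
```

The fused variance is always smaller than the best single channel's variance, which is what provides the "certainty of the estimate" alongside the mass itself.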
Turbidity-controlled suspended sediment sampling for runoff-event load estimation
Jack Lewis
1996-01-01
Abstract - For estimating suspended sediment concentration (SSC) in rivers, turbidity is generally a much better predictor than water discharge. Although it is now possible to collect continuous turbidity data even at remote sites, sediment sampling and load estimation are still conventionally based on discharge. With frequent calibration the relation of turbidity to...
Bayesian estimation of P(X > x) from a small sample of Gaussian data
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2017-01-01
The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...
Estimating time to pregnancy from current durations in a cross-sectional sample
DEFF Research Database (Denmark)
Keiding, Niels; Kvist, Kajsa; Hartvig, Helle
2002-01-01
A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....
Bridging the gaps between non-invasive genetic sampling and population parameter estimation
Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz
2011-01-01
Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-01-01
Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
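For the median/min/max reporting scenario, Wan et al. give closed-form approximations; a sketch of the commonly cited formulas (mean ≈ (a + 2m + b)/4, with the SD recovered from the expected normal range), assuming approximately normal data:

```python
from statistics import NormalDist

def mean_sd_from_median_range(a, m, b, n):
    """Approximate the sample mean and SD from the minimum a, median m,
    maximum b, and sample size n, per the Wan et al. (2014) formulas for
    the {min, median, max} scenario."""
    mean = (a + 2 * m + b) / 4
    # Expected standardized range of n normal draws, 2 * Phi^{-1}((n-0.375)/(n+0.25))
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / xi
    return mean, sd

# Hypothetical trial summary: min 10, median 30, max 70, n = 25
mean, sd = mean_sd_from_median_range(10, 30, 70, 25)
```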
An analytical protocol for the determination of total mercury concentrations in solid peat samples
DEFF Research Database (Denmark)
Roos-Barraclough, F; Givelet, N; Martinez-Cortizas, A
2002-01-01
Traditional peat sample preparation methods such as drying at high temperatures and milling may be unsuitable for Hg concentration determination in peats due to the possible presence of volatile Hg species, which could be lost during drying. Here, the effects of sample preparation and natural […] 12 and 8.52 ng kg(-1) h(-1), respectively). Fertilising the peat slightly increased Hg loss (3.08 ng kg(-1) h(-1) in NPK-fertilised peat compared to 0.28 ng kg(-1) h(-1) in unfertilised peat, when averaged over all temperatures used). Homogenising samples by grinding in a machine also caused a loss of Hg. A comparison of two Hg profiles from an Arctic peat core, measured in frozen samples and in air-dried samples, revealed that no Hg losses occurred upon air-drying. A comparison of Hg concentrations in several plant species that make up peat showed that some species (Pinus mugo, Sphagnum recurvum, […]
DEFF Research Database (Denmark)
Larsen, Karen B
2017-01-01
[…] abnormal development. Furthermore, many studies of brain cell numbers have employed biased counting methods, whereas innovations in stereology during the past 20-30 years enable reliable and efficient estimates of cell numbers. However, estimates of cell volumes and densities in fetal brain samples […]
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
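The parametric mean ± 2 SD estimate compared above can be sketched as follows (the creatinine values are hypothetical, for illustration only; a real study would also check normality or Box-Cox-transform first):

```python
import statistics

def reference_interval_parametric(values):
    """Parametric reference interval: mean +/- 2 SD (assumes ~Gaussian data)."""
    m = statistics.fmean(values)
    s = statistics.stdev(values)
    return m - 2 * s, m + 2 * s

# Hypothetical plasma creatinine values (umol/L) from 27 healthy dogs
creatinine = [62, 71, 55, 80, 66, 74, 90, 58, 69, 77, 85, 60, 72,
              64, 79, 68, 93, 57, 75, 81, 63, 70, 88, 66, 73, 59, 76]
lo, hi = reference_interval_parametric(creatinine)
```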
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...
Khaemba, W.; Stein, A.
2002-01-01
Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like
Conditional estimation of exponential random graph models from snowball sampling designs
Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng
2013-01-01
A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members
Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples
Directory of Open Access Journals (Sweden)
Liu Xin
2015-09-01
This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
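The inner three-parameter LS fit (amplitude, phase, dc at a fixed trial frequency) can be sketched as below; the golden-section search would wrap this fit, moving the trial frequency to minimize the residual. This is a generic IEEE-1057-style sketch, not the authors' exact implementation:

```python
import math

def three_param_sine_fit(samples, freq, fs):
    """Given a trial frequency, least-squares fit y[n] ~ A*cos(w n) + B*sin(w n) + C
    and return (amplitude, phase, dc)."""
    w = 2 * math.pi * freq / fs
    rows = [(math.cos(w * n), math.sin(w * n), 1.0) for n in range(len(samples))]
    # Build and solve the 3x3 normal equations G x = b by Gaussian elimination.
    G = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, samples)) for i in range(3)]
    for i in range(3):  # forward elimination with partial pivoting
        p = max(range(i, 3), key=lambda k: abs(G[k][i]))
        G[i], G[p] = G[p], G[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            f = G[k][i] / G[i][i]
            for j in range(i, 3):
                G[k][j] -= f * G[i][j]
            b[k] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        x[i] = (b[i] - sum(G[i][j] * x[j] for j in range(i + 1, 3))) / G[i][i]
    A, B, C = x
    return math.hypot(A, B), math.atan2(A, B), C

# Synthetic tone: 50 Hz sine, amplitude 2, phase 0.5 rad, dc 0.3, fs = 1 kHz
fs, f0 = 1000.0, 50.0
y = [2.0 * math.sin(2 * math.pi * f0 * n / fs + 0.5) + 0.3 for n in range(512)]
amp, ph, dc = three_param_sine_fit(y, f0, fs)
```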
Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.
Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo
2012-01-01
The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
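Under simple random sampling, the nonparametric AUC estimator that TDS designs are benchmarked against is the Mann-Whitney statistic; a sketch with simulated biomarker values (not the trial data, and without the TDS weighting the paper develops):

```python
import numpy as np

def auc_mann_whitney(diseased, healthy):
    """Empirical AUC = P(marker_diseased > marker_healthy) + 0.5*P(tie),
    i.e., the Mann-Whitney U statistic scaled to [0, 1]."""
    d = np.asarray(diseased, float)[:, None]
    h = np.asarray(healthy, float)[None, :]
    return float((d > h).mean() + 0.5 * (d == h).mean())

rng = np.random.default_rng(0)
d = rng.normal(1.0, 1.0, 500)   # biomarker in subjects with disease
h = rng.normal(0.0, 1.0, 500)   # biomarker in subjects without disease
auc = auc_mann_whitney(d, h)    # true AUC here is Phi(1/sqrt(2)) ~ 0.76
```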
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Williams, Christopher J.; Moffitt, Christine M.
2003-03-01
An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
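The Bayesian Gibbs-sampling analysis itself is not reproduced here, but the identity underneath it is simple: with pool size k, sensitivity Se and specificity Sp, a pool tests positive with probability pi = Se*(1-(1-p)^k) + (1-Sp)*(1-p)^k, which can be inverted for a quick moment-type point estimate (hypothetical counts below):

```python
def pooled_prevalence(n_positive_pools, n_pools, k, se=1.0, sp=1.0):
    """Invert pi = se*(1-(1-p)^k) + (1-sp)*(1-p)^k for the individual
    prevalence p, using the observed pool-positive fraction as pi."""
    pi_hat = n_positive_pools / n_pools
    q_k = (se - pi_hat) / (se + sp - 1.0)   # estimate of (1-p)^k
    q_k = min(max(q_k, 0.0), 1.0)           # clamp into [0, 1]
    return 1.0 - q_k ** (1.0 / k)

# 30 of 100 pools of 5 fish positive, assuming a perfect assay
p_perfect = pooled_prevalence(30, 100, 5)
# the same data, assuming Se = 0.95 and Sp = 0.98
p_adjusted = pooled_prevalence(30, 100, 5, se=0.95, sp=0.98)
```

The Bayesian approach in the abstract replaces this point inversion with full posterior sampling, which also propagates the uncertainty in Se and Sp.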
Wu, Yiman; Li, Liang
2012-12-18
For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling
Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.
2013-01-01
Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via simple random sampling procedure though makes calculations much simpler but this assumption is often violated in practice as the data does not always yield simple random ...
International Nuclear Information System (INIS)
Singh, Sukhpal; Kumar, Ashok; Singh, Charanjeet; Thind, Kulwant Singh; Mudahar, Gurmel S.
2008-01-01
The simultaneous variation of gamma ray buildup factors with absorber thickness (up to 6.5 mfp) and total scatter acceptance angle (which is the sum of incidence and exit beam divergence) in the media of high volume flyash concrete and water was studied experimentally using a point isotropic 137Cs source.
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
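The baseline estimator behind such comparisons simply integrates the instantaneous flux (concentration times discharge) over the sampled times; a sketch with made-up storm values (units and the trapezoidal rule are the only assumptions):

```python
import numpy as np

def solute_load_kg(times_s, conc_mg_per_l, flow_m3_per_s):
    """Trapezoidal integration of solute flux over time.
    C [mg/L] * Q [m3/s] equals g/s, so multiply by 1e-3 for kg/s."""
    t = np.asarray(times_s, float)
    flux_kg_s = (np.asarray(conc_mg_per_l, float)
                 * np.asarray(flow_m3_per_s, float) * 1e-3)
    dt = np.diff(t)
    return float(np.sum(dt * 0.5 * (flux_kg_s[:-1] + flux_kg_s[1:])))

# hourly sampling over one day with a storm peak at hour 8 (illustrative)
t = np.arange(25) * 3600.0
q = 2.0 + 1.5*np.exp(-((t/3600 - 8)**2) / 8)   # discharge, m3/s
c = 5.0 + 3.0*np.exp(-((t/3600 - 8)**2) / 8)   # concentration, mg/L
load = solute_load_kg(t, c, q)
```

Coarsening the sampling grid (e.g., `t[::6]` for 6-hourly samples) and recomputing the load is the kind of comparison the study performs across strategies.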
Estimation of Total Usual Calcium and Vitamin D Intakes in the United States
Bailey, Regan L.; Dodd, Kevin W.; Goldman, Joseph A.; Gahche, Jaime J.; Dwyer, Johanna T.; Moshfegh, Alanna J.; Sempos, Christopher T.; Picciano, Mary Frances
2010-01-01
Our objective in this study was to estimate calcium intakes from food, water, dietary supplements, and antacids for U.S. citizens aged ≥1 y using NHANES 2003–2006 data and the Dietary Reference Intake panel age groupings. Similar estimates were calculated for vitamin D intake from food and dietary supplements using NHANES 2005–2006. Diet was assessed with 2 24-h recalls; dietary supplement and antacid use were determined by questionnaire. The National Cancer Institute method was used to estim...
Determination of total alpha and beta activities on vegetable samples by LSC
International Nuclear Information System (INIS)
Nogueira, Regina Apolinaria; Santos, Eliane Eugenia dos; Bakker, Alexandre Pereira; Vavassori, Giullia
2011-01-01
Gross alpha and beta analyses are screening techniques used for environmental radioactivity monitoring. The present study proposes to determine gross alpha and beta activities in vegetable samples by using LSC - liquid scintillation spectrometry. The procedure was applied to vegetable foods. After ashing vegetable samples in a muffle furnace, 100 mg of ash were added to a gel mixture of scintillation cocktails, water - Instagel - Ultima Gold AB (6:10:4 ml), in a polyethylene vial. An Am-241 standard solution and a KCl (K-40) solution were used to determine the counting configuration, alpha/beta efficiencies and spillover.
Optimum sample size to estimate mean parasite abundance in fish parasite surveys
Directory of Open Access Journals (Sweden)
Shvydka S.
2018-03-01
Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for finding the optimum sample size and understanding the expected precision level of the mean. Given the superior performance of the BLB relative to the formula, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and ii) a sample size of 80 or more host individuals allows accurate and precise estimation of the mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
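The formula-based arm of such comparisons is typically the classic negative-binomial sample-size expression n = (1/D^2)(1/mu + 1/k), where D is the desired relative standard error of the mean and k the dispersion parameter estimated from the data; a sketch with illustrative values of mu and k (not the published Ligophorus estimates):

```python
import math

def nb_sample_size(mean_abundance, k, rel_precision):
    """Hosts needed so that SE(mean)/mean = rel_precision for counts
    following a negative binomial: n = (1/D^2) * (1/mu + 1/k)."""
    return math.ceil((1.0 / rel_precision**2)
                     * (1.0 / mean_abundance + 1.0 / k))

# mean abundance 10 worms/host, strong aggregation (k = 0.5), D = 0.2
n_hosts = nb_sample_size(mean_abundance=10.0, k=0.5, rel_precision=0.2)
```

Note how, for aggregated distributions (small k), the 1/k term dominates: the required sample size is driven by aggregation rather than by the mean itself.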
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival.
Directory of Open Access Journals (Sweden)
Ziya Kordjazi
Full Text Available Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery.
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival
Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas
2016-01-01
Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
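For a scalar parameter, the ESS behind the rule of thumb is N divided by the integrated autocorrelation time; the tree-topology methods of the paper work by reducing topologies to scalar traces (e.g., distances to a focal tree) and applying the same machinery. A minimal scalar-ESS sketch (the simple truncate-at-first-non-positive-lag rule is an assumption; production tools use more careful truncation):

```python
import numpy as np

def effective_sample_size(trace):
    """ESS = N / (1 + 2*sum(rho_t)), truncating the autocorrelation
    sum at the first non-positive lag."""
    x = np.asarray(trace, float)
    n = x.size
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[n - 1:]   # empirical autocovariance
    acf = c / c[0]
    tau = 1.0
    for t in range(1, n):
        if acf[t] <= 0.0:
            break
        tau += 2.0 * acf[t]
    return n / tau

rng = np.random.default_rng(1)
iid = rng.normal(size=4000)        # independent draws: ESS near N
ar = np.zeros(4000)                # strongly autocorrelated AR(1) chain
for i in range(1, 4000):
    ar[i] = 0.9 * ar[i - 1] + rng.normal()
```

The AR(1) chain with coefficient 0.9 has an integrated autocorrelation time near (1+0.9)/(1-0.9) = 19, so its ESS is roughly N/19 despite containing the same number of samples.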
Tojo, Axel; Malm, Johan; Marko-Varga, György; Lilja, Hans; Laurell, Thomas
2014-01-01
Antibody microarrays have become widespread, but their use for quantitative analyses in clinical samples has not yet been established. We investigated an immunoassay based on nanoporous silicon antibody microarrays for quantification of total prostate-specific antigen (PSA) in 80 clinical plasma samples, and provide quantitative data from a duplex microarray assay that simultaneously quantifies free and total PSA in plasma. To further develop the assay, the porous silicon chips were placed into a standard 96-well microtiter plate for higher-throughput analysis. The samples analyzed by this quantitative microarray were 80 plasma samples obtained from men undergoing clinical PSA testing (dynamic range: 0.14-44 ng/ml, LOD: 0.14 ng/ml). The second dataset, measuring free PSA (dynamic range: 0.40-74.9 ng/ml, LOD: 0.47 ng/ml) and total PSA (dynamic range: 0.87-295 ng/ml, LOD: 0.76 ng/ml), was also obtained from the clinical routine. The reference for the quantification was a commercially available assay, the ProStatus PSA Free/Total DELFIA. In the analysis of the 80 plasma samples the microarray platform performed well across the range of total PSA levels. This assay might have the potential to substitute for the large-scale microtiter plate format in diagnostic applications. The duplex assay paves the way for a future quantitative multiplex assay, which would analyze several prostate cancer biomarkers simultaneously. PMID:22921878
Determination of total alpha index in samples of sea water by the coprecipitation method
International Nuclear Information System (INIS)
Suarez-Navarro, J.A.; Pujol, L.; Pozuelo, M.; Pablo, A. de
1998-01-01
An environmental radiological monitoring network for Spanish sea waters was set up by CEDEX in 1993. Water radioactivity is determined quarterly at eleven sampling points along the Spanish coast. The gross alpha activity is one of the parameters to be determined. The usual method for monitoring gross alpha activity includes sample evaporation to dryness on a disk and counting using a ZnS(Ag) scintillation detector. Nevertheless, gross alpha activity determination in saline waters, such as sea waters, is troublesome, because mass attenuation is high and only a very small volume of water can be used (0.2 ml). The coprecipitation method allows 500 ml water samples to be analyzed, so the detection limit is reduced and sensitivity is improved. In this work, the coprecipitation method was used to determine the gross alpha activity in the radiological network of the Spanish coastal sea waters during 1996 and 1997. Gross alpha activity was very homogeneous. It averaged 0.0844±0.0086 Bq/l and ranged from 0.062 to 0.102 Bq/l. In collaboration with CIEMAT, a set of samples was analyzed; they averaged 0.0689±0.0074 Bq/l and ranged from 0.056 to 0.082 Bq/l. (Author) 5 refs
Federal Regulations: Efforts to Estimate Total Costs and Benefits of Rules
2004-04-07
the Chamber of Commerce, academicians, the media, and others, and is sometimes cited with a high degree of certainty." For example, some articles... House of Representatives, Feb. 25, 2004; and testimony of William P. Kovacs, Vice President, U.S. Chamber of Commerce, before the Subcommittee on Energy... estimated the annual cost to employers of the Family and Medical Leave Act at $825 million, but that the Chamber of Commerce estimated the cost at between $3
International Nuclear Information System (INIS)
Bachoc, Francois
2014-01-01
Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
International Nuclear Information System (INIS)
Bhalke, Sunil; Raghunath, Radha; Mishra, Suchismita; Suseela, B.; Tripathi, R.M.; Pandit, G.G.; Shukla, V.K.; Puranik, V.D.
2005-01-01
A method is standardized for the estimation of uranium by adsorptive stripping voltammetry using chloranilic acid (CAA) as the complexing agent. The optimum parameters to obtain the best sensitivity and good reproducibility for uranium were a 60 s adsorption time, pH 1.8, chloranilic acid (2×10⁻⁴ M) and 0.002 M EDTA. The peak potential under these conditions was found to be -0.03 V. With these optimum parameters a sensitivity of 1.19 nA/nM uranium was observed. The detection limit was found to be 0.55 nM; this can be further improved by increasing the adsorption time. Using this method, uranium was estimated in different types of water samples such as seawater, synthetic seawater, stream water, tap water, well water, bore-well water and process water. The method has also been used for estimation of uranium in sand, in the organic solvent used for extraction of uranium from phosphoric acid, and in its raffinate. Sample digestion procedures used for estimation of uranium in various matrices are discussed. It has been observed from the analysis that the uranium peak potential changes with the matrix of the sample; hence, the standard addition method is the best way to obtain reliable and accurate results. Quality assurance of the standardized method was verified by analyzing a certified reference water sample from USDOE, by participating in intercomparison exercises, and also by estimating the uranium content in water samples by both differential pulse adsorptive stripping voltammetric and laser fluorimetric techniques. (author)
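The standard-addition correction recommended at the end of the abstract is a one-line computation: measure the sample signal, spike with a known concentration, remeasure, and scale. A sketch assuming a linear response and negligible volume change (the numbers below are chosen to match the quoted 1.19 nA/nM sensitivity and are illustrative only):

```python
def standard_addition(signal_0, signal_spiked, conc_spike_nM):
    """Analyte concentration from one standard addition, assuming a
    linear response and negligible dilution:
    c0 = c_spike * S0 / (S_spiked - S0)."""
    return conc_spike_nM * signal_0 / (signal_spiked - signal_0)

# 10 nM uranium at 1.19 nA/nM gives 11.9 nA; a 10 nM spike doubles it
c0 = standard_addition(11.9, 23.8, 10.0)
```

Because the calibration is built inside the sample itself, matrix effects that shift the sensitivity cancel out, which is why the abstract prefers this over an external calibration curve when peak potentials drift with the matrix.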
Bell, Kristie L; Boyd, Roslyn N; Walker, Jacqueline L; Stevenson, Richard D; Davies, Peter S W
2013-08-01
Body composition assessment is an essential component of nutritional evaluation in children with cerebral palsy. This study aimed to validate bioelectrical impedance to estimate total body water in young children with cerebral palsy and to determine the best electrode placement in unilateral impairment. 55 young children with cerebral palsy across all functional ability levels were included. Height/length was measured or estimated from knee height. Total body water was estimated using a Bodystat 1500MDD and three equations, and measured using the gold-standard deuterium dilution technique. Comparisons were made using Bland-Altman analysis. For children with bilateral impairment, the Fjeld equation estimated total body water with the least bias (limits of agreement): 0.0 L (-1.4 L to 1.5 L); the Pencharz equation produced the greatest: 2.7 L (0.6 L to 4.8 L). For children with unilateral impairment, differences between measured and estimated total body water were lowest on the unimpaired side using the Fjeld equation, 0.1 L (-1.5 L to 1.6 L), and greatest for the Pencharz equation. The ability of bioelectrical impedance to estimate total body water depends on the equation chosen. The Fjeld equation was the most accurate for the group; however, individual results varied by up to 18%. A population-specific equation was developed and may enhance the accuracy of estimates. Australian New Zealand Clinical Trials Registry (ANZCTR) number: ACTRN12611000616976. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Density meter algorithm and system for estimating sampling/mixing uncertainty
International Nuclear Information System (INIS)
Shine, E.P.
1986-01-01
The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
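The two-stage idea (pre-sample to locate the failure region, then centre an importance density there and reweight) can be sketched on a toy limit state; the performance function below is hypothetical and merely stands in for the passive-system model:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical performance function: 'failure' when g(x) < 0."""
    return 4.0 - x.sum(axis=1)          # two standard-normal input parameters

# stage 1: crude pre-sampling to locate failure points
x_pre = rng.standard_normal((20000, 2))
mu = x_pre[g(x_pre) < 0].mean(axis=0)   # centre of the observed failure samples

# stage 2: importance sampling from N(mu, I) with likelihood-ratio weights
n = 5000
x = rng.standard_normal((n, 2)) + mu
log_w = -0.5*(x**2).sum(axis=1) + 0.5*((x - mu)**2).sum(axis=1)
p_fail = float(np.mean((g(x) < 0) * np.exp(log_w)))
# exact failure probability for this toy case: P(N(0,2) > 4) ~ 2.3e-3
```

Because the proposal concentrates samples in the failure region, a few thousand evaluations suffice where crude Monte Carlo would need hundreds of thousands for comparable precision.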
Xu, Huijun; Gordon, J James; Siebers, Jeffrey V
2011-02-01
A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with a prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by an effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ. The
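Two ingredients of this procedure are easy to sketch: uniform direction sampling on the unit sphere (via normalized Gaussian draws) and the radial stepping that finds a dosimetric margin along one direction. The dose model below is a made-up spherically symmetric fall-off, not a planned distribution:

```python
import numpy as np

rng = np.random.default_rng(7)

def isotropic_directions(n):
    """Unit vectors uniformly distributed on the sphere, obtained by
    normalizing 3-D standard-normal draws."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def dosimetric_margin(dose_at, origin, direction, d_iso, delta=0.1, r_max=100.0):
    """Step outward by delta (mm) until the dose drops below d_iso;
    the crossing radius is the DM in that direction."""
    r = 0.0
    while dose_at(origin + r * direction) >= d_iso and r < r_max:
        r += delta
    return r

# toy linear dose fall-off (Gy vs mm): the 70 Gy isodose sits at r = 10 mm
dose = lambda p: 80.0 - np.linalg.norm(p)
dirs = isotropic_directions(1000)
dmd = np.array([dosimetric_margin(dose, np.zeros(3), u, 70.0) for u in dirs])
```

For this spherically symmetric toy dose every direction crosses the isodose at the same radius, so the DMD collapses to a point; for a real plan the spread of `dmd` against the setup-uncertainty distribution is what drives the coverage Q.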
A model for estimating the minimum number of offspring to sample in studies of reproductive success.
Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M
2011-01-01
Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled distributions deviated significantly (χ² goodness-of-fit test) from the underlying reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
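The second stage of such a model needs draws from a negative binomial matched to the estimated mean and variance; a method-of-moments sketch (the μ and v below are hypothetical, not the published salmonid estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

def nb_from_moments(mu, v):
    """NumPy's (n, p) negative-binomial parameterization from mean mu and
    variance v > mu: n = mu^2/(v - mu), p = mu/v, so that E = n(1-p)/p = mu."""
    return mu**2 / (v - mu), mu / v

# hypothetical per-parent offspring mean and variance
n_nb, p_nb = nb_from_moments(2.5, 10.0)
offspring = rng.negative_binomial(n_nb, p_nb, size=200)   # 200 parents
```

Subsampling a fraction of `offspring` totals and refitting is the basic simulation step; repeating it over many replicates gives the rejection rates the abstract reports.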
A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.
Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R
2017-07-01
The conventional radio-analytical technique used for the estimation of Pu isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly in repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu isotopes in bioassay samples. This paper gives the details of standardization of a rapid method for estimation of Pu isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples, co-precipitation of plutonium along with calcium phosphate, sample preparation using a high-volume centrifuge, and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and its activity estimated by alpha spectrometry using a 236Pu tracer. Ten routine urine samples of radiation workers were analyzed, and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.
Qing, Siyu
2014-01-01
The National Science Foundation (NSF) Survey of Doctorate Recipients (SDR) collects information on a sample of individuals in the United States with PhD degrees. A significant portion of the sampled individuals appear in multiple survey years and can be linked across time. Survey weights in each year are created and adjusted for oversampling and…
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
International Nuclear Information System (INIS)
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
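The equivalence the abstract establishes, that marginalizing a sampled systematic error reproduces the multivariate Gaussian likelihood, can be demonstrated for a toy case. All numbers below are hypothetical, and a single shared systematic error is the simplest instance of the model (univariate normal random and systematic components).

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Hypothetical setup: 5 experimental points sharing one systematic error
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # model prediction
sig_rand = np.full(5, 0.3)                   # random (uncorrelated) errors
sig_sys = 0.5                                # fully correlated systematic error
y = np.array([1.4, 2.5, 3.3, 4.6, 5.4])      # "measured" values

# Conventional likelihood: multivariate Gaussian with the full covariance
cov = np.diag(sig_rand**2) + sig_sys**2 * np.ones((5, 5))
L_exact = multivariate_normal.pdf(y, mean=t, cov=cov)

# Sampling-based estimate: marginalize the systematic shift by Monte Carlo,
# avoiding the matrix inversion hidden in the multivariate pdf
s = rng.normal(0.0, sig_sys, size=200_000)
per_sample = norm.pdf(y[None, :], loc=t[None, :] + s[:, None],
                      scale=sig_rand[None, :]).prod(axis=1)
L_sampled = per_sample.mean()

print(L_exact, L_sampled)  # the two values should agree closely
```

As the abstract notes, the Monte Carlo route converges slowly relative to direct inversion; the large sample size here reflects that.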
International Nuclear Information System (INIS)
Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun
2009-01-01
Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits
International Nuclear Information System (INIS)
Khaykovich, I.M.; Savosin, S.I.
1992-01-01
The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)
Omer, Muhammad
2012-07-01
This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
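Under the uncorrelated-samples premise the abstract arrives at, the variance of a mean-discharge estimate scales inversely with the number of samples collected during the exposure time. The sketch below illustrates that scaling with hypothetical numbers; it is not the paper's full model.

```python
def discharge_variance(sigma_q, sampling_interval, exposure_time):
    """Variance of the mean-discharge estimate when successive ADCP samples
    are treated as uncorrelated: var = sigma_q**2 / n_samples (illustrative)."""
    n_samples = exposure_time / sampling_interval
    return sigma_q**2 / n_samples

# Doubling the exposure time halves the variance of the estimate
v1 = discharge_variance(sigma_q=50.0, sampling_interval=1.0, exposure_time=300.0)
v2 = discharge_variance(sigma_q=50.0, sampling_interval=1.0, exposure_time=600.0)
print(v1, v2)
```

Choosing an exposure time then amounts to inverting this relation for a target variance.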
Some remarks on estimating a covariance structure model from a sample correlation matrix
Maydeu Olivares, Alberto; Hernández Estrada, Adolfo
2000-01-01
A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...
DEFF Research Database (Denmark)
Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders
2012-01-01
We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal...... be extended to various other cases including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set....
Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio
2016-01-01
Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
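The recursive Fourier transform itself is the authors' contribution and is not available in standard libraries, but the Lomb-Scargle baseline they compare against is implemented in SciPy. A sketch on synthetic nonuniformly sampled data (all values hypothetical) shows the kind of spectrum estimation both methods target:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Nonuniform sample times mimicking irregular beat-to-beat intervals
t = np.sort(rng.uniform(0.0, 60.0, size=400))
f0 = 0.25                                 # Hz, a low-frequency HRV-like component
y = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.normal(size=t.size)

# Lomb-Scargle handles the irregular spacing without interpolation;
# scipy's lombscargle expects angular frequencies (rad/s)
freqs_hz = np.linspace(0.01, 1.0, 500)
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)

peak_hz = freqs_hz[np.argmax(pgram)]
print(peak_hz)  # close to 0.25 Hz
```

The abstract's point is that the RFT reaches comparable estimates with order-N iterative updates, versus worse than N log(N) for the LST.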
Omulo, Sylvia; Lofgren, Eric T; Mugoh, Maina; Alando, Moshe; Obiya, Joshua; Kipyegon, Korir; Kikwai, Gilbert; Gumbi, Wilson; Kariuki, Samuel; Call, Douglas R
2017-05-01
Investigators often rely on studies of Escherichia coli to characterize the burden of antibiotic resistance in a clinical or community setting. To determine whether prevalence estimates for antibiotic resistance are sensitive to sample handling and interpretive criteria, we collected presumptive E. coli isolates (24 or 95 per stool sample) from a community in an urban informal settlement in Kenya. Isolates were tested for susceptibility to nine antibiotics using agar breakpoint assays, and results were analyzed using generalized linear mixed models. Prevalence estimates did not differ detectably between the two sampling strategies (P > 0.1), nor across five distinct E. coli colony morphologies on MacConkey agar plates (P > 0.2). Successive re-plating of samples for up to five consecutive days had little to no impact on prevalence estimates. Finally, culturing E. coli under different conditions (with 5% CO₂ or micro-aerobic) did not affect estimates of prevalence. For the conditions tested in these experiments, minor modifications in sample processing protocols are unlikely to bias estimates of the prevalence of antibiotic resistance for fecal E. coli. Copyright © 2017 Elsevier B.V. All rights reserved.
Stereological estimation of total cell numbers in the human cerebral and cerebellar cortex
DEFF Research Database (Denmark)
Walløe, Solveig; Pakkenberg, Bente; Fabricius, Katrine
2014-01-01
estimates and were often very time-consuming. Within the last 20-30 years, it has become possible to rely on more advanced and unbiased methods. These methods have provided us with information about fetal brain development, differences in cell numbers between men and women, the effect of age on selected...
Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik
2018-06-01
The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...
A NEW METHOD FOR NON DESTRUCTIVE ESTIMATION OF Jc IN YBaCuO CERAMIC SAMPLES
Directory of Open Access Journals (Sweden)
Giancarlo Cordeiro Costa
2014-12-01
This work presents a new method for estimating Jc as a bulk characteristic of YBCO blocks. The experimental magnetic interaction force between a SmCo permanent magnet and a YBCO block was compared with finite element method (FEM) simulation results, allowing a search for the best-fitting value of the critical current of the superconducting sample. Because the FEM simulations were based on the Bean model, the critical current density was taken as an unknown parameter. This is a non-destructive estimation method, since there is no need to break off even a small piece of the sample for analysis.
Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System
Energy Technology Data Exchange (ETDEWEB)
Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)
2016-10-15
FNC Technology Co., Ltd. has developed test facilities for aerosol generation, mixing, sampling and measurement under high-pressure and high-temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO₂/ethanol mixture. In the sampling system, a glass fiber membrane filter is used to measure the average mass concentration. The purpose of the tests is to develop a commercial test module for aerosol generation, mixing and sampling applicable to the environmental industry and to safety-related systems in nuclear power plants. Based on experimental results using a main carrier gas of a steam and air mixture, the uncertainty of the sampled aerosol concentration was estimated by applying the Gaussian error propagation law. The sampled aerosol concentration is not measured directly but must be calculated from other quantities: it is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors, which propagate through the combination of variables in the function. Using operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. These uncertainty estimates will be utilized as performance data for the aerosol sampling system.
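First-order Gaussian error propagation for a derived concentration can be sketched as follows. The functional form C = m / ((Q_air + Q_steam) · t) and every number below are illustrative assumptions, not the facility's actual measurement model.

```python
import math

def aerosol_concentration_uncertainty(m, dm, q_air, dq_air,
                                      q_steam, dq_steam, t, dt):
    """Propagate absolute errors through C = m / ((q_air + q_steam) * t)
    using the first-order Gaussian error propagation law."""
    q = q_air + q_steam
    c = m / (q * t)
    dq = math.hypot(dq_air, dq_steam)       # independent flow errors in quadrature
    rel = math.sqrt((dm / m) ** 2 + (dq / q) ** 2 + (dt / t) ** 2)
    return c, c * rel

c, dc = aerosol_concentration_uncertainty(
    m=2.0e-3, dm=1.0e-5,         # sampled mass on the filter [g]
    q_air=10.0, dq_air=0.03,     # air flow [L/min]
    q_steam=5.0, dq_steam=0.02,  # steam-equivalent flow [L/min]
    t=30.0, dt=0.05)             # sampling time [min]
print(c, dc / c)  # relative uncertainty below 1%, consistent with the abstract
```

Each relative-error term enters in quadrature because the variables are assumed independent.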
International Nuclear Information System (INIS)
Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.
2004-01-01
Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
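Scaling clerical-review results from sampled record-pairs up to whole threshold strata can be sketched as below. The stratum sizes and review counts are hypothetical, and the sketch ignores sampling variability (the paper's evaluation addresses that empirically).

```python
def estimate_linkage_quality(strata):
    """Estimate precision and recall from clerically reviewed samples.

    strata: list of (n_pairs, accepted, sample_size, true_in_sample), where
    'accepted' flags whether pairs at this threshold fall above the cut-off.
    The sampled true-match rate is scaled up to the full stratum.
    """
    tp = fp = fn = 0.0
    for n_pairs, accepted, sample_size, true_in_sample in strata:
        true_rate = true_in_sample / sample_size
        if accepted:
            tp += n_pairs * true_rate
            fp += n_pairs * (1.0 - true_rate)
        else:
            fn += n_pairs * true_rate    # true matches below the cut-off
    return tp / (tp + fp), tp / (tp + fn)

# Two accepted strata and one rejected stratum (hypothetical numbers)
strata = [(10_000, True, 200, 198),   # high-scoring pairs: mostly true
          (2_000, True, 200, 170),    # near the cut-off: mixed
          (5_000, False, 200, 10)]    # below cut-off: a few missed matches
p, r = estimate_linkage_quality(strata)
print(p, r)
```

Reviewing below-threshold strata is what makes the recall (false negative) estimate possible.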
Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J
2018-06-21
Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
Comparison of chlorzoxazone one-sample methods to estimate CYP2E1 activity in humans
DEFF Research Database (Denmark)
Kramer, Iza; Dalhoff, Kim; Clemmesen, Jens O
2003-01-01
OBJECTIVE: Comparison of a one-sample with a multi-sample method (the metabolic fractional clearance) to estimate CYP2E1 activity in humans. METHODS: Healthy, male Caucasians (n=19) were included. The multi-sample fractional clearance (Cl(fe)) of chlorzoxazone was compared with one-time-point clearance estimation (Cl(est)) at 3, 4, 5 and 6 h. Furthermore, the metabolite/drug ratios (MRs) estimated from one-time-point samples at 1, 2, 3, 4, 5 and 6 h were compared with Cl(fe). RESULTS: The concordance between Cl(est) and Cl(fe) was highest at 6 h. The minimal mean prediction error (MPE) of Cl... estimates, Cl(est) at 3 h or 6 h, and MR at 3 h, can serve as reliable markers of CYP2E1 activity. The one-sample clearance method is an accurate, renal function-independent measure of the intrinsic activity; it is simple to use and easily applicable to humans.
Mosbrucker, Adam; Spicer, Kurt R.; Christianson, Tami; Uhrich, Mark A.
2015-01-01
data range among sensors. Of greatest interest to many programs is a hysteresis in the relationship between turbidity and SSC, attributed to temporal variation of particle size distribution (Landers and Sturm, 2013; Uhrich et al., 2014). This phenomenon causes increased uncertainty in regression-estimated values of SSC, due to changes in nephelometric reflectance off the varying grain sizes in suspension (Uhrich et al., 2014). Here, we assess the feasibility and application of close-range remote sensing to quantify SSC and particle size distribution of a disturbed, and highly-turbid, river system. We use a consumer-grade digital camera to acquire imagery of the river surface and a depth-integrating sampler to collect concurrent suspended-sediment samples. We then develop two empirical linear regression models to relate image spectral information to concentrations of fine sediment (clay to silt) and total suspended sediment. Before presenting our regression model development, we briefly summarize each data-acquisition method.
Directory of Open Access Journals (Sweden)
Anders K. Mortensen
2017-12-01
The clover-grass ratio is an important factor in composing feed rations for livestock. Cameras in the field allow the user to estimate the clover-grass ratio using image analysis; however, current methods assume the total dry matter is known. This paper presents the preliminary results of an image analysis method for non-destructively estimating the total dry matter of clover-grass. The presented method includes three steps: (1) classification of image illumination using a histogram of the difference between excess green and excess red; (2) segmentation of clover and grass using edge detection and morphology; and (3) estimation of total dry matter using grass coverage derived from the segmentation together with climate parameters. The method was developed and evaluated on images captured in a clover-grass plot experiment during the spring growing season. The preliminary results are promising and show a high correlation between the image-based total dry matter estimate and the harvested dry matter (R² = 0.93) with an RMSE of 210 kg ha⁻¹.
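Step (1) relies on excess green and excess red indices; the definitions below (ExG = 2g − r − b, ExR = 1.4r − g on chromaticity-normalized channels) are the standard ones from the vegetation-segmentation literature and are assumed here, as the abstract does not spell them out.

```python
import numpy as np

def excess_green_minus_red(rgb):
    """Pixel-wise ExG - ExR score used to separate vegetation from soil:
    ExG = 2g - r - b, ExR = 1.4r - g, on chromaticity-normalized channels."""
    rgb = rgb.astype(float)
    chrom = rgb / (rgb.sum(axis=-1, keepdims=True) + 1e-9)
    r, g, b = chrom[..., 0], chrom[..., 1], chrom[..., 2]
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return exg - exr

# A green (vegetation) pixel scores high; a reddish-brown (soil) pixel scores low
img = np.array([[[40, 120, 30], [120, 80, 60]]], dtype=np.uint8)
scores = excess_green_minus_red(img)
print(scores)
```

Thresholding such a score map at zero is a common way to obtain the vegetation mask that the later segmentation and coverage steps build on.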
Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food frequency questionnaire (FFQ)–based TAC estimates assessed by oxygen radical absorbance capaci...
G. Doblhammer (Gabriele); Milewski, N. (Nadja); F. Peters (Frederick)
2010-01-01
This paper introduces a set of methods for estimating fertility indicators in the absence of recent and short-term birth statistics. For Germany, we propose a set of straightforward methods that allow for the computation of monthly and yearly total fertility rates (mTFR) on the basis of
International Nuclear Information System (INIS)
Murarka, I.P.; Bodeau, D.J.
1977-11-01
This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used
Estimating and localizing the algebraic and total numerical errors using flux reconstructions
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Strakoš, Z.; Vohralík, M.
2018-01-01
Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016
Spencer, S.; Ogle, S.; Borch, T.; Rock, B.
2008-12-01
Monitoring soil C stocks is critical to assess the impact of future climate and land use change on carbon sinks and sources in agricultural lands. A benchmark network for soil carbon monitoring of stock changes is being designed for US agricultural lands, with 3000-5000 sites anticipated and re-sampling on a 5- to 10-year basis. Approximately 1000 sites would be sampled per year, producing around 15,000 soil samples to be processed for total, organic, and inorganic carbon, as well as bulk density and nitrogen. Laboratory processing of soil samples is cost- and time-intensive; therefore we are testing the efficacy of using near-infrared (NIR) and mid-infrared (MIR) spectral methods for estimating soil carbon. As part of an initial implementation of national soil carbon monitoring, we collected over 1800 soil samples from 45 cropland sites in the mid-continental region of the U.S. Samples were processed using standard laboratory methods to determine the variables above. Carbon and nitrogen were determined by dry combustion, and inorganic carbon was estimated with an acid-pressure test. 600 samples are being scanned using a bench-top NIR reflectance spectrometer (30 g of 2 mm oven-dried soil and 30 g of 8 mm air-dried soil) and 500 samples using a MIR Fourier-Transform Infrared Spectrometer (FTIR) with a DRIFT reflectance accessory (0.2 g oven-dried ground soil). Lab-measured carbon will be compared to spectrally-estimated carbon contents using a Partial Least Squares (PLS) multivariate statistical approach. PLS attempts to develop a soil C predictive model that can then be used to estimate C in soil samples not lab-processed. The spectral analysis of soil samples, either whole or partially processed, can potentially save both funding resources and time to process samples. This is particularly relevant for the implementation of a national monitoring network for soil carbon. This poster will discuss our methods, initial results and potential for using NIR and MIR spectral
TOTAL WOOD VOLUME ESTIMATION OF EUCALYPTUS SPECIES BY IMAGES OF LANDSAT SATELLITE
Directory of Open Access Journals (Sweden)
Elias Fernando Berra
2012-12-01
Full Text Available http://dx.doi.org/10.5902/198050987566 Models relating spectral responses to biophysical parameters aim to estimate variables, such as wood volume, without the need for frequent field measurements. The objective was to develop models to estimate wood volume from Landsat 5 TM images, supported by regional forest inventory data. The image was geo-referenced and converted to spectral reflectance. The index images NDVI (Normalized Difference Vegetation Index) and SR (Simple Ratio) were then generated. The reflectance values of the bands (TM1, TM2, TM3 and TM4) and of the indices (NDVI and SR) were related to the wood volume. The strongest correlations with volume were obtained for the NDVI and SR indices. Variable selection was performed with the stepwise method, which returned three regression models as significant in explaining the variation in volume. Finally, the best-fitted model was selected (volume = -830.95 + 46.05 SR + 107.47 TM2) and applied to the Landsat image, so that the pixels came to represent the estimated volume in m³/ha over the Eucalyptus sp. production units. This model, significant at the 95% confidence level, explains 68% of the wood volume variation.
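The band indices and the fitted model quoted above can be written directly in code. A small sketch follows; the reflectance inputs are illustrative, and the units/scale of the TM2 term are assumed to match those used in the paper's fit:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    """Simple Ratio (SR) index: NIR over red reflectance."""
    return nir / red

def eucalyptus_volume(sr_value, tm2_reflectance):
    """Best-fitted model from the abstract (m^3/ha):
    volume = -830.95 + 46.05*SR + 107.47*TM2."""
    return -830.95 + 46.05 * sr_value + 107.47 * tm2_reflectance
```

Applied per pixel, this turns a reflectance image into a wood-volume map, as the abstract describes.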
Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model
Granato, Gregory; Jones, Susan Cheung
2017-01-01
The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.
Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.
Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko
2017-06-01
Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved when 1-m² plots contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
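The precision measure used above, the coefficient of variation of the plot mean across repeated bulk samples, can be sketched as follows (a generic cv computation with made-up counts, not the study's data):

```python
import math
import statistics

def cv_of_mean(counts):
    """Percent coefficient of variation (cv) of the plot mean, estimated
    from replicate bulk-sample cyst counts: cv = 100 * SE / mean,
    where SE is the standard error of the mean across repetitions."""
    m = statistics.mean(counts)
    se = statistics.stdev(counts) / math.sqrt(len(counts))
    return 100.0 * se / m
```

A cv at or below the recommended 17% would indicate sufficient sampling precision for the plot.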
Directory of Open Access Journals (Sweden)
Singh Lalji
2006-10-01
Full Text Available Abstract Background The Bengal tiger (Panthera tigris tigris), the National Animal of India, is an endangered species. Estimating populations of such species is the main objective when designing conservation measures and evaluating those already in place. Due to the tiger's cryptic and secretive behaviour, it is not possible to enumerate and monitor its populations through direct observations; instead, indirect methods have always been used for studying tigers in the wild. DNA methods based on non-invasive sampling have not been attempted so far for tiger population studies in India. We describe here a pilot study using DNA extracted from faecal samples of tigers for the purpose of population estimation. Results In this study, PCR primers were developed based on tiger-specific variations in the mitochondrial cytochrome b for reliably distinguishing tiger faecal samples from those of sympatric carnivores. Microsatellite markers were developed for the identification of individual tigers, with a sibling Probability of Identity of 0.005, that can distinguish even closely related individuals with 99.9% certainty. The effectiveness of using field-collected tiger faecal samples for DNA analysis was evaluated by sampling, identifying and subsequently genotyping samples from two protected areas in southern India. Conclusion Our results demonstrate the feasibility of using tiger faecal matter as a potential source of DNA for population estimation of tigers in protected areas in India, in addition to the methods currently in use.
International Nuclear Information System (INIS)
Zio, E.; Pedroni, N.
2010-01-01
The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along that direction; this allows for a significant reduction of the variance of the failure probability estimator with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundred, which may be required in the case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated with the determination of the important direction
Is a 'convenience' sample useful for estimating immunization coverage in a small population?
Weir, Jean E; Jones, Carrie
2008-01-01
Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTF immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.
Estimation of Total Yearly CO2 Emissions by Wildfires in Mexico during the Period 1999–2010
Directory of Open Access Journals (Sweden)
Flor Bautista Vicente
2014-01-01
Full Text Available Wildfires have become a global environmental problem that demands estimates of their CO2 emissions, as they increasingly degrade air quality. Using available information on documented wildfires and a data set of satellite-detected hot spots, total yearly emissions of CO2 in Mexico were estimated for the period 1999–2010. A map of the main vegetation groups was used to calculate the total area of every vegetation type. The yearly number of hot spots per vegetation type was calculated. Estimates of the CO2 emitted in a wildfire were then obtained by considering parameters such as forest fuel load, vegetation type, burning efficiency, and mean burned area. The number of wildfires and the total affected area showed annual variability. The yearly mean area affected by a single wildfire varied between 0.2 and 0.3 km². The total affected area during the period 1999 to 2010 was 86,800 km², which corresponds to 4.3% of the Mexican territory. Total CO2 emissions were approximately 112 Tg. The most affected vegetation types were forest and rainforest.
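Per-fire CO2 estimates of this kind are typically products of burned area, fuel load, and burning efficiency. The sketch below uses an assumed Seiler–Crutzen-style form with illustrative parameter values; it is not the paper's exact formulation:

```python
def co2_emissions_tg(burned_area_km2, fuel_load_kg_m2, burning_efficiency,
                     carbon_fraction=0.45):
    """Estimate CO2 emissions (Tg) from a burned area.

    biomass burned = area * fuel load * burning efficiency; the carbon
    released is converted to CO2 via the molar mass ratio 44/12.
    The 0.45 carbon fraction of dry biomass is an assumed typical value.
    """
    biomass_kg = burned_area_km2 * 1e6 * fuel_load_kg_m2 * burning_efficiency
    co2_kg = biomass_kg * carbon_fraction * (44.0 / 12.0)
    return co2_kg / 1e9  # 1 Tg = 1e9 kg
```

Summing such per-vegetation-type estimates over all fires in a year yields the national totals reported in the abstract.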
Directory of Open Access Journals (Sweden)
Fatemeh Zare Baghiabad
2017-09-01
Full Text Available The difficulty of accurately estimating the effort needed for software development makes software effort estimation a challenging issue. Besides estimating the total effort, it is very important to determine the effort elapsed in each software development step, because mistakes in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops are considered between development steps, with return rates that differ for each project. The different return rates make it possible to determine the percentage of elapsed effort in each software development step distinctly. Moreover, the error resulting from the optimized effort estimation is measured, and the optimal coefficients to modify the model are sought. The results of the comparison between the proposed model and other models showed that the model is capable of estimating the total effort with high accuracy (a margin of error of about 0.114) and of estimating the effort elapsed in each software development step.
Directory of Open Access Journals (Sweden)
Gener Tadeu Pereira
2013-10-01
Full Text Available The sampling scheme is essential in investigations of the spatial variability of soil properties in Soil Science studies. The high cost of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevents their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled in a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m apart, in the depth range of 0.00-0.20 m. To obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configurations before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes proved very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of the kriging estimates of the physical-chemical properties.
Energy Technology Data Exchange (ETDEWEB)
McLean, Christopher T. [Univ. of New Mexico, Albuquerque, NM (United States)
2000-07-01
Los Alamos National Laboratory has a long-standing program of sampling storm water runoff inside the Laboratory boundaries. In 1995, the Laboratory started collecting the samples using automated storm water sampling stations; prior to this time the samples were collected manually. The Laboratory has also been periodically collecting sediment samples from Cochiti Lake. This paper presents the data for Pu-238 and Pu-239 bound to the sediments for Los Alamos Canyon storm water runoff and compares the sampling types by mass loading and as a percentage of the sediment deposition to Cochiti Lake. The data for both manual and automated sampling are used to calculate mass loads from Los Alamos Canyon on a yearly basis. The automated samples show mass loading 200-500 percent greater for Pu-238 and 300-700 percent greater for Pu-239 than the manual samples. Using the mean manual flow volume for mass loading calculations, the automated samples are over 900 percent greater for Pu-238 and over 1800 percent greater for Pu-239. Evaluating the Pu-238 and Pu-239 activities as a percentage of deposition to Cochiti Lake indicates that the automated samples are 700-1300 percent greater for Pu-238 and 200-500 percent greater for Pu-239. The variance was calculated by two methods. The first method calculates the variance for each sample event. The second method calculates the variance by the total volume of water discharged in Los Alamos Canyon for the year.
Energy Technology Data Exchange (ETDEWEB)
Galicia C, F. J.
2015-07-01
This study aimed to develop a methodology for preparing and quantifying samples containing beta- and/or alpha-emitting radionuclides, in order to determine the gross alpha and beta activity indices of radioactive waste samples. To this end, a planchette-preparation device was designed to assist planchette preparation in a controlled environment free of corrosive vapors. Planchettes were prepared in three media (nitrate, carbonate and sulfate) at different mass thicknesses for natural uranium (an alpha and beta emitter), and only in nitrate medium in the case of Sr-90 (a pure beta emitter); these planchettes were counted in an alpha/beta counter in order to construct the self-absorption curves for alpha and beta particles. These curves are necessary to determine the alpha-beta activity index of any sample because they provide the self-absorption correction factor to be applied in calculating the index. Samples with U were prepared with the help of the planchette-preparation device and subsequently analyzed in the MPC-100 proportional counter (Pic brand). Samples with Sr-90 were prepared without the device, to see whether the mass thicknesses obtained behaved differently; they were likewise calcined and counted in the MPC-100. To perform the counting, the counter's operating parameters were first determined: operating voltages of 630 and 1500 V for alpha and beta particles, respectively; a counting routine in which the time and count type were adjusted; and counting efficiencies for alpha and beta particles, obtained with the aid of {sup 210}Po (alpha) and {sup 90}Sr (beta) calibration sources. According to the results, the counts per minute decrease with increasing mass thickness of the sample (the self-absorption curve), and this behavior fits an exponential function in all cases studied. The lowest self-absorption of alpha and beta particles in the case of U was obtained in sulfate medium. The self-absorption curves of Sr-90
Estimating Inorganic Arsenic Exposure from U.S. Rice and Total Water Intakes.
Mantha, Madhavi; Yeary, Edward; Trent, John; Creed, Patricia A; Kubachka, Kevin; Hanley, Traci; Shockey, Nohora; Heitkemper, Douglas; Caruso, Joseph; Xue, Jianping; Rice, Glenn; Wymer, Larry; Creed, John T
2017-05-30
Among nonoccupationally exposed U.S. residents, drinking water and diet are considered primary exposure pathways for inorganic arsenic (iAs). In drinking water, iAs is the primary form of arsenic (As), while dietary As speciation techniques are used to differentiate iAs from less toxic arsenicals in food matrices. Our goal was to estimate the distribution of iAs exposure rates from drinking water intakes and rice consumption in the U.S. population and in ethnic- and age-based subpopulations. The distribution of iAs in drinking water was estimated by population-weighting the iAs concentrations for each drinking water utility in the Second Six-Year Review data set. To estimate the distribution of iAs concentrations in rice ingested by U.S. consumers, 54 grain-specific, production-weighted composites of rice obtained from U.S. mills were extracted and speciated using both a quantitative dilute nitric acid extraction and speciation (DNAS) and an in vitro gastrointestinal assay, to provide upper-bound and bioaccessible estimates, respectively. Daily drinking water intake and rice consumption rate distributions were developed using data from the What We Eat in America (WWEIA) study. Using these data sets, the Stochastic Human Exposure and Dose Simulation (SHEDS) model estimated that mean iAs exposures from drinking water and rice were 4.2 μg/day and 1.4 μg/day, respectively, for the entire U.S. population. The Tribal, Asian, and Pacific population exhibited the highest mean daily exposure of iAs from cooked rice (2.8 μg/day); the mean exposure rate for children between ages 1 and 2 years in this population is 0.104 μg/kg body weight (BW)/day. An average consumer drinking 1.5 L of water daily that contains between 2 and 3 ng iAs/mL is exposed to approximately the same amount of iAs as a mean Tribal, Asian, and Pacific consumer is exposed to from rice. https://doi.org/10.1289/EHP418.
Matrix algebra and sampling theory : The case of the Horvitz-Thompson estimator
Dol, W.; Steerneman, A.G.M.; Wansbeek, T.J.
Matrix algebra is a tool not commonly employed in sampling theory. The intention of this paper is to help change this situation by showing, in the context of the Horvitz-Thompson (HT) estimator, the convenience of the use of a number of matrix-algebra results. Sufficient conditions for the
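The HT estimator itself is compact in matrix form: with sample values y and first-order inclusion probabilities on the diagonal of a matrix Π, the estimated population total is y'Π⁻¹1. A sketch (assuming NumPy; the example values are illustrative):

```python
import numpy as np

def ht_total(y, pi):
    """Horvitz-Thompson estimator of a population total, written as the
    matrix product y' diag(pi)^{-1} 1, i.e. the sum of y_i / pi_i over
    the observed sample, with pi_i the inclusion probabilities."""
    y = np.asarray(y, dtype=float)
    Pi = np.diag(np.asarray(pi, dtype=float))
    return y @ np.linalg.inv(Pi) @ np.ones(len(y))
```

Weighting each observation by the reciprocal of its inclusion probability is what makes the estimator design-unbiased for the total.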
Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian
2015-02-01
Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may thereby obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort; only rarefaction was able to remove this effect, and we recommend this method for estimating species richness with "big data" collections.
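Rarefaction, the method recommended above, estimates the expected richness in a subsample of fixed size, which removes the effect of unequal sampling effort. A minimal sketch of Hurlbert's individual-based rarefaction (with illustrative abundances, not the Ecuador data):

```python
from math import comb

def rarefied_richness(abundances, n):
    """Expected species count in a random draw of n individuals
    (Hurlbert's rarefaction): sum over species of
    1 - C(N - N_i, n) / C(N, n), where N is the total abundance
    and N_i the abundance of species i."""
    N = sum(abundances)
    return sum(1 - comb(N - ni, n) / comb(N, n) for ni in abundances)
```

Comparing sites at a common subsample size n puts their richness estimates on an equal footing.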
DEFF Research Database (Denmark)
Huber, Martin; Lechner, Michael; Mellace, Giovanni
2016-01-01
Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...... of the methods often (but not always) varies with the features of the data generating process....
Infusion and sampling site effects on two-pool model estimates of leucine metabolism
International Nuclear Information System (INIS)
Helland, S.J.; Grisdale-Helland, B.; Nissen, S.
1988-01-01
To assess the effect of the site of isotope infusion on estimates of leucine metabolism, infusions of alpha-[4,5-3H]ketoisocaproate (KIC) and [U-14C]leucine were made into the left or right ventricles of sheep and pigs. Blood was sampled from the opposite ventricle. In both species, left ventricular infusions resulted in significantly lower specific radioactivities (SA) of [14C]leucine and [3H]KIC. [14C]KIC SA was found to be insensitive to the infusion and sampling sites. In addition, [14C]KIC SA was found to be equal to the SA of [14C]leucine only during the left heart infusions. Therefore, [14C]KIC SA was used as the only estimate of [14C] SA in the equations of the two-pool model. This model eliminated the influence of the site of infusion and blood sampling on the estimates of leucine entry and reduced the impact on the estimates of proteolysis and oxidation. This two-pool model could not compensate for the underestimation of transamination reactions occurring during the traditional venous isotope infusion and arterial blood sampling
Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?
R. L. Czaplewski
2003-01-01
Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed 'to avoid gross errors in estimations of deforestation rates' because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...
B-graph sampling to estimate the size of a hidden population
Spreen, M.; Bogaerts, S.
2015-01-01
Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Estimating Total Fusion Cross Sections by Using a Coupled-Channel Method
Energy Technology Data Exchange (ETDEWEB)
Choi, Ki-Seok; Cheoun, Myung-Ki [Soongsil University, Seoul (Korea, Republic of); Kim, K. S. [Korea Aerospace University, Koyang (Korea, Republic of); Kim, T. H.; So, W. Y. [Kangwon National University at Dogye, Samcheok (Korea, Republic of)
2017-01-15
We calculate the total fusion cross sections for the {sup 6}He + {sup 209}Bi, {sup 6}Li + {sup 209}Bi, {sup 9}Be + {sup 208}Pb, {sup 10}Be + {sup 209}Bi, and {sup 11}Li + {sup 208}Pb systems by using a coupled-channel (CC) method and compare the results with the experimental data. In the CC approach for the total fusion cross sections, we exploit a globally determined Woods-Saxon potential with Akyüz-Winther parameters and couplings of the ground state to the low-lying excited states in the projectile and the target nuclei. The total fusion cross sections obtained with the CC couplings are compared with those obtained without them. The latter approach is tantamount to a one-dimensional barrier penetration model. Finally, our approach is applied to understand new data for the {sup 11}Li + {sup 208}Pb system. Possible ambiguities inherent in those approaches are discussed in detail for further applications to the fusion of halo and/or neutron-rich nuclei.
Estimating Inorganic Arsenic Exposure from U.S. Rice and Total Water Intakes
Mantha, Madhavi; Yeary, Edward; Trent, John; Creed, Patricia A.; Kubachka, Kevin; Hanley, Traci; Shockey, Nohora; Heitkemper, Douglas; Caruso, Joseph; Xue, Jianping; Rice, Glenn; Wymer, Larry; Creed, John T.
2017-01-01
Background: Among nonoccupationally exposed U.S. residents, drinking water and diet are considered primary exposure pathways for inorganic arsenic (iAs). In drinking water, iAs is the primary form of arsenic (As), while dietary As speciation techniques are used to differentiate iAs from less toxic arsenicals in food matrices. Objectives: Our goal was to estimate the distribution of iAs exposure rates from drinking water intakes and rice consumption in the U.S. population and ethnic- and age-based subpopulations.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.
Energy Technology Data Exchange (ETDEWEB)
Parresol, Bernard, R.
2004-02-01
This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.
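In horizontal point sampling of the kind tallied above, each counted tree represents BAF divided by its basal area in trees per acre, and per-acre volume is the sum of tree volumes times these expansion factors. A sketch using the standard English-unit identity (an assumed textbook form, not necessarily the report's exact computation):

```python
def trees_per_acre(dbh_in, baf=10.0):
    """Expansion factor for one tally tree in point sampling:
    BAF / basal area, with basal area (sq ft) = 0.005454154 * dbh^2
    for dbh in inches. The BAF of 10 is an illustrative choice."""
    return baf / (0.005454154 * dbh_in ** 2)

def volume_per_acre(trees, baf=10.0):
    """Per-acre volume from (dbh_in, tree_volume_cuft) tallies at a point."""
    return sum(vol * trees_per_acre(dbh, baf) for dbh, vol in trees)
```

Averaging such per-point values over the systematic grid of points yields the stand-level volume estimates reported.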
Estimating an appropriate sampling frequency for monitoring ground water well contamination
International Nuclear Information System (INIS)
Tuckfield, R.C.
1994-01-01
Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time
Directory of Open Access Journals (Sweden)
Adriana Bruscato Bortoluzzo
2011-01-01
Full Text Available The objective of this article is to estimate insurance claims from an auto dataset using the Tweedie and zero-adjusted inverse Gaussian (ZAIG) methods. We identify factors that influence claim size and probability, and compare the results of these methods, both of which forecast outcomes accurately. Vehicle characteristics like territory, age, origin and type distinctly influence claim size and probability. This distinct impact is not always present in the estimated Tweedie model. Auto insurers should consider estimating total claim size using both the Tweedie and ZAIG methods. This allows for the estimation of confidence intervals based on empirical quantiles using bootstrap simulation. Furthermore, the fitted models may be useful in developing a strategy to obtain premium pricing.
Leslie, H.A.; Hermens, J.L.; Kraak, M.H.S.
2004-01-01
Body residues of compounds with a narcotic mode of action that exceed critical levels result in baseline toxicity in organisms. Previous studies have shown that internal concentrations in organisms also can be estimated by way of passive sampling. In this experiment, solid-phase microextraction
Estimation of the total daily oral intake of NDMA attributable to drinking water.
Fristachi, Anthony; Rice, Glenn
2007-09-01
Disinfection with chlorine and chloramine leads to the formation of many disinfection by-products including N-Nitrosodimethylamine (NDMA). Because NDMA is a probable human carcinogen, public health officials are concerned with its occurrence in drinking water. The goal of this study was to estimate NDMA concentrations from exogenous (i.e., drinking water and food) and endogenous (i.e., formed in the human body) sources, calculate average daily doses for ingestion route exposures and estimate the proportional oral intake (POI) of NDMA attributable to the consumption of drinking water relative to other ingestion sources of NDMA. The POI is predicted to be 0.02% relative to exogenous and endogenous NDMA sources combined. When only exogenous sources are considered, the POI was predicted to be 2.7%. The exclusion of endogenously formed NDMA causes the POI to increase dramatically, reflecting its importance as a potentially major source of exposure and uncertainty in the model. Although concentrations of NDMA in foods are small and human exposure to NDMA from foods is quite low, the contribution from food is predicted to be high relative to that of drinking water. The mean concentration of NDMA in drinking water would need to increase from 2.1 × 10-3 μg/L to 0.10 μg/L, a 47-fold increase, for the POI to reach 1% relative to all sources of NDMA considered in our model, suggesting that drinking water consumption is most likely a minor source of NDMA exposure.
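The proportional oral intake (POI) arithmetic above reduces to a ratio of the drinking-water dose to the total daily dose. A sketch with illustrative inputs (the non-water dose below is a placeholder, not a value from the study):

```python
def proportional_oral_intake(water_conc_ug_per_l, water_l_per_day,
                             other_dose_ug_per_day):
    """Fraction of total daily NDMA ingestion attributable to drinking water:
    water dose / (water dose + dose from all other sources)."""
    water_dose = water_conc_ug_per_l * water_l_per_day
    return water_dose / (water_dose + other_dose_ug_per_day)
```

The abstract's 47-fold figure is consistent with this arithmetic, since 0.10 / (2.1 × 10-3) ≈ 47.6.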
The estimation of total body fat by inelastic neutron scattering - a geometrical feasibility study
International Nuclear Information System (INIS)
Lizos, F.; Kotzasarlidoou, M.; Makridou, A.; Giannopoulou, K.
2012-01-01
A rough quantitative representation of the basic elements in a human body is given for a hypothetical normal adult weighing 70 kg. Two basic quantities can be measured: the fat-free mass (FFM) and the fat mass (FM). The present simulation deals with the estimation of storage fat in the human body; to accomplish this task, the body is represented as a cylindrical phantom containing a uniform distribution of triacylglycerols. The whole process is analyzed and simulated with a geometrical model and a computer program that takes into account the different attenuation for neutrons and photons; the amount of gamma radiation reaching the detector is also calculated. The net result is the determination of the sensitivity for a particular set-up, and by relating the resulting data to the amount of carbon, the quantity of fat is estimated. In addition, the non-uniformity, expressing the consistency of the system, is calculated from the computer programs. To determine the storage fat, a simulation model was built to represent the detection of the carbon atoms in triacylglycerols.
Directory of Open Access Journals (Sweden)
Femke Broekhuis
Full Text Available Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km² across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetahs are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetahs moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low-density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.
Broekhuis, Femke; Gopalaswamy, Arjun M
2016-01-01
Networked Estimation for Event-Based Sampling Systems with Packet Dropouts
Directory of Open Access Journals (Sweden)
Young Soo Suh
2009-04-01
Full Text Available This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially in reducing network bandwidth usage. However, it cannot detect packet dropouts, because data transmission and reception do not use a periodic time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme, called modified SOD, in which sensor data are sent when either the change of the sensor output exceeds a given threshold or more than a given interval has elapsed since the last transmission. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts occur.
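The modified send-on-delta rule described above can be sketched in a few lines, assuming periodic sensing; the threshold, timeout and test signal below are hypothetical choices for illustration.

```python
import math

def modified_sod(samples, period, threshold, timeout):
    """Modified send-on-delta: transmit when the change since the last
    transmitted value exceeds `threshold`, or when at least `timeout`
    time has elapsed since the last transmission (illustrative sketch)."""
    sent = [(0.0, samples[0])]                # always send the first sample
    for i, y in enumerate(samples[1:], start=1):
        t = i * period
        t_last, y_last = sent[-1]
        if abs(y - y_last) > threshold or (t - t_last) >= timeout:
            sent.append((t, y))
    return sent

# A slowly varying signal: plain SOD would stay silent for long stretches,
# making packet dropouts undetectable; the timeout bounds that silence.
signal = [math.sin(0.05 * k) for k in range(200)]
tx = modified_sod(signal, period=0.1, threshold=0.2, timeout=2.0)
print(f"{len(tx)} transmissions for {len(signal)} samples")
```

Because a transmission is forced once the interval elapses, the gap between any two consecutive packets is bounded, which is what lets the estimator flag a dropout.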
International Nuclear Information System (INIS)
Morio, Jerome
2011-01-01
Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than crude Monte Carlo methods. It consists in generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, which must be able to generate the rare random events of interest more often. In practice, the optimisation of this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of spatial launcher impact position, which has become an increasingly important issue in the field of aeronautics.
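The core IS mechanism, sampling from an auxiliary density shifted onto the rare region and reweighting by the likelihood ratio, can be illustrated on a Gaussian tail probability where the weight has a closed form. The shifted proposal N(4, 1) is a textbook choice, not the paper's adaptive NAIS density.

```python
import math
import random

random.seed(0)
N = 100_000
level = 4.0   # estimate p = P(X > 4) for X ~ N(0, 1), a rare event

# Auxiliary density centred on the rare region: q = N(4, 1).
# Likelihood ratio p(x)/q(x) = exp((x-4)^2/2 - x^2/2) = exp(8 - 4x).
acc = 0.0
for _ in range(N):
    x = random.gauss(level, 1.0)          # draw from the auxiliary PDF
    if x > level:
        acc += math.exp(8.0 - 4.0 * x)    # weight the surviving samples
p_is = acc / N

p_true = 0.5 * math.erfc(level / math.sqrt(2.0))   # exact tail probability
print(f"IS estimate {p_is:.3e} vs exact {p_true:.3e}")
```

Crude Monte Carlo with the same budget would see only a handful of exceedances (p is about 3×10⁻⁵), whereas roughly half the auxiliary draws land in the rare region.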
Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P
2008-10-21
Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm³, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
Monte Carlo-based development of a shield and total background estimation for the COBRA experiment
International Nuclear Information System (INIS)
Heidrich, Nadine
2014-11-01
The COBRA experiment aims at the measurement of neutrinoless double beta decay and thus at the determination of the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be of the order of 10⁻³ counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13,824 CdZnTe detectors of 6 cm³ each. For the shield development, single-layer and multi-layer shields were investigated and a shield design was optimized with respect to high-energy muon-induced neutrons. The best design was found to be the combination of 10 cm boron-doped polyethylene as the outermost layer, 20 cm lead, and 10 cm copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2×10⁻⁶ counts/kg/keV/yr. Additionally, the shield, with a total thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources, including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contributions of the individual detector parts and background sources to the total background rate were investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles, with a share of 99 % in total. Regarding surface events a contribution of 26
International Nuclear Information System (INIS)
Franklin, Robson L.
2011-01-01
The Rio Grande reservoir is located in the metropolitan area of Sao Paulo and is used for recreation and as a source for drinking water production. During the last decades, mercury contamination has been detected in the sediments of this reservoir, mainly in the eastern part, near its main affluent, in the Rio Grande da Serra and Ribeirao Pires counties. In the present study, bottom sediment samples were collected at four different sites in four sampling campaigns during the period from September 2008 to January 2010. The samples were dried at room temperature, ground and passed through a 2 mm sieve. Total Hg determination in the sediment samples was carried out by two different analytical techniques: neutron activation analysis (NAA) and cold vapor atomic absorption spectrometry (CV AAS). The methodology was validated, in terms of precision and accuracy, with reference materials, and presented recoveries of 83 to 108%. The total Hg results obtained by both analytical techniques ranged from 3 to 71 mg kg⁻¹ and were considered similar by statistical analysis, even though NAA furnishes the total concentration while CV AAS using the 3015 digestion procedure characterizes only the bioavailable Hg. These results confirm that both analytical techniques were suitable to detect the Hg concentration levels in the Rio Grande sediments studied. The Hg levels in the sediment of the Rio Grande reservoir confirm the anthropogenic origin of this element in this ecosystem. (author)
Directory of Open Access Journals (Sweden)
Wojciech Rosloniec
2010-01-01
Full Text Available The TLS ESPRIT method is investigated in application to the estimation of the angular coordinates (angles of arrival) of two moving objects in the presence of an external, relatively strong uncorrelated signal. As the radar antenna system, a 32-element uniform linear array (ULA) is used. Various computer simulations have been carried out to demonstrate the good accuracy and high spatial resolution of the TLS ESPRIT method in the scenario outlined above. It is also shown that the accuracy and angular resolution can be significantly increased by using the proposed preprocessing (beamforming). Most of the simulation results, presented in graphical form, have been compared to the corresponding results obtained using the ESPRIT method and the conventional amplitude monopulse method aided by coherent Doppler filtering.
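The shift-invariance idea behind ESPRIT can be sketched for the two-object ULA scenario. The least-squares variant is shown for brevity (the paper uses the TLS variant with beamforming preprocessing), and the snapshot count and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, snapshots, d = 32, 400, 0.5          # ULA elements, samples, spacing/lambda
true_deg = np.array([-5.0, 10.0])       # angles of arrival of the two objects

# Simulated ULA snapshots: x(t) = A s(t) + noise (illustrative signal model).
n = np.arange(M)[:, None]
A = np.exp(2j * np.pi * d * n * np.sin(np.radians(true_deg)))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.05 * (rng.standard_normal((M, snapshots))
                    + 1j * rng.standard_normal((M, snapshots)))

# ESPRIT core: signal subspace + shift invariance between subarrays.
R = X @ X.conj().T / snapshots                  # sample covariance
w, V = np.linalg.eigh(R)
Es = V[:, -2:]                                  # signal subspace (2 sources)
Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]          # least-squares invariance fit
phases = np.angle(np.linalg.eigvals(Psi))
est_deg = np.degrees(np.arcsin(phases / (2 * np.pi * d)))
print(np.sort(est_deg))                         # close to [-5, 10]
```

The eigenvalues of the invariance operator carry the phase progression between adjacent elements, from which the arrival angles follow directly.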
DEFF Research Database (Denmark)
Skousgaard, Søren Glud; Hjelmborg, Jacob; Skytthe, Axel
2015-01-01
INTRODUCTION: Primary hip osteoarthritis, radiographic as well as symptomatic, is highly associated with increasing age in both genders. However, little is known about the mechanisms behind this, in particular whether this increase is caused by genetic factors. This study examined the risk and heritability of primary osteoarthritis of the hip leading to a total hip arthroplasty, and whether this heritability increased with increasing age. METHODS: In a nationwide population-based follow-up study, 118,788 twins from the Danish Twin Register and 90,007 individuals from the Danish Hip Arthroplasty Register ... had not had a total hip arthroplasty at the time of follow-up. RESULTS: There were 94,063 twins eligible for analyses, comprising 835 cases of 36 concordant and 763 discordant twin pairs. The probability increased particularly from 50 years of age. After sex and age adjustment, a significant additive ...
Estimating the Total Economic Value of Cultivated Flower Land in Taiwan
Directory of Open Access Journals (Sweden)
Chin-Huang Huang
2015-04-01
Full Text Available Many arable land areas have been converted to residential or business uses by Taiwan government authorities, because low farmland value is associated with the low value of agricultural products. However, agriculture is multifunctional. This study investigates farmland value through Total Economic Value (TEV) for Tianwei Township, Taiwan's largest floral farmland region. Direct use value measures the floral products' output value and the recreational benefit. The recreational benefit from visitors' flower sightseeing was measured by the travel cost method (TCM). Option value and non-use value, including bequest value and existence value, measure residents' willingness to pay through the double-bounded dichotomous contingent valuation method (CVM). The results show that the total floral product output was NT$1.441 billion in 2007, and the recreational benefit roughly NT$17.757 billion. The intangible option and non-use values are approximately between NT$5 million and NT$15 million. Therefore, ignoring the various values of farmland might lead to an underestimation of its value.
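The TEV aggregation above is a straightforward sum of components. The sketch below uses the abstract's figures, with the intangible component taken at an assumed midpoint of the reported range.

```python
# Total Economic Value as the sum of components (figures from the abstract,
# in NT$ billion; the intangible midpoint 0.010 is our assumption).
output_value = 1.441        # floral products' output, 2007
recreation = 17.757         # travel cost method estimate
intangible = 0.010          # option + non-use values, midpoint of 5-15 million
tev = output_value + recreation + intangible
print(f"TEV ~ NT${tev:.3f} billion")   # dominated by the recreational benefit
```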
Estimate of total CO2 output from desertified sandy land in China
International Nuclear Information System (INIS)
Duan Zhenghu; Lanzhou University; Xiao Honglang; Dong Zhibao; He Xingdong; Wang Gang
2001-01-01
Soil is an important factor in regional and global carbon budgets because it serves as a reservoir of a large amount of organic carbon. In our study, using remote sensing data from different periods, we analyzed the development and reversion of desertification in China and calculated the variations in organic carbon content of the desertified lands. The results showed that the total storage of organic carbon in the 0-50 cm soil layer of the desertified lands is 855 Mt. Over the recent 40 years, the total amount of CO2 released to the atmosphere by land desertification processes corresponded to 150 MtC, while the amount sequestered by desertification reversal corresponded to 59 MtC. Hence, the net amount of CO2 released from the desertified lands of China corresponded to 91 MtC, about 68.42% of the 133 MtC of annual CO2 release in the global temperate and frigid zones. Simultaneously, this indicates that desertified land undergoing desertification reversal has a greater CO2 sequestration potential than other soils. (Author)
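The carbon budget quoted above reduces to simple arithmetic, which the figures in the abstract can be checked against:

```python
# Net carbon balance of China's desertified lands (figures from the abstract).
released = 150.0       # MtC emitted by desertification over the last 40 yr
sequestered = 59.0     # MtC taken up by desertification reversal
net = released - sequestered
share = net / 133.0 * 100   # vs. annual release in temperate/frigid zones
print(f"net release = {net:.0f} MtC ({share:.2f}% of 133 MtC)")
```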
Directory of Open Access Journals (Sweden)
Michel G Arsenault
2014-06-01
Full Text Available Congenital Anomalies of the Kidney and Urinary Tract (CAKUT) are a polymorphic group of clinical disorders comprising the major cause of renal failure in children. Included within CAKUT is a wide spectrum of developmental malformations, ranging from renal agenesis to renal hypoplasia and renal dysplasia (maldifferentiation of renal tissue), each characterized by varying deficits in nephron number. First presented in the Brenner hypothesis, low congenital nephron endowment is becoming recognized as an antecedent cause of adult-onset hypertension, a leading cause of coronary heart disease, stroke and renal failure in North America. Genetic mouse models of impaired nephrogenesis and nephron endowment provide a critical framework for understanding the origins of human kidney disease. Current methods to quantitate nephron number include (i) acid maceration, (ii) estimation of nephron number from a small number of tissue sections, (iii) imaging modalities such as MRI, and (iv) the gold-standard physical disector/fractionator method. Despite its accuracy, the physical disector/fractionator method is rarely employed because it is labour-intensive, time-consuming and costly to perform. Consequently, less rigorous methods of nephron estimation are routinely employed by many laboratories. Here we present an updated, digitized version of the physical disector/fractionator method using free open-source Fiji software, which we have termed the integrated disector method. This updated version of the gold-standard modality accurately, rapidly and cost-effectively quantitates nephron number in embryonic and post-natal mouse kidneys, and can be easily adapted for stereological measurements in other organ systems.
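The fractionator estimate that such workflows implement scales the raw disector count by the reciprocals of the sampling fractions. The fractions and count below are hypothetical values for illustration, not figures from the paper.

```python
def fractionator_estimate(counted, ssf, asf, tsf):
    """Fractionator estimate: scale the raw count Q- by the reciprocals of
    the section (ssf), area (asf) and thickness (tsf) sampling fractions."""
    return counted * (1 / ssf) * (1 / asf) * (1 / tsf)

# Hypothetical sampling design: every 10th section, 1/25 of each section's
# area, and half of each section's thickness sampled by the disector.
n = fractionator_estimate(counted=540, ssf=1 / 10, asf=1 / 25, tsf=1 / 2)
print(f"estimated total: {n:.0f} nephrons")   # 540 * 10 * 25 * 2 = 270000
```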
Directory of Open Access Journals (Sweden)
Belachew Gizachew
2016-06-01
Full Text Available Abstract Background A functional forest carbon measuring, reporting and verification (MRV) system to support climate change mitigation policies, such as REDD+, requires estimates of forest biomass carbon as an input to estimate emissions. A combination of field inventory and remote sensing is expected to provide those data. By linking Landsat 8 and forest inventory data, we (1) developed linear mixed-effects models for total living biomass (TLB) estimation as a function of spectral variables, (2) developed a 30 m resolution map of total living carbon (TLC), and (3) estimated the TLB stock of the study area. Inventory data consisted of tree measurements from 500 plots in 63 clusters in a 15,700 km2 study area in miombo woodlands of Tanzania. The Landsat 8 data comprised two climate data record images covering the inventory area. Results We found a linear relationship between TLB and Landsat 8 derived spectral variables, and there was no clear evidence of spectral data saturation at higher biomass values. The root-mean-square error of the values predicted by the linear model linking TLB and the normalized difference vegetation index (NDVI) is 44 t/ha (49 % of the mean value). The estimated TLB for the study area was 140 Mt, with a mean TLB density of 81 t/ha and a 95 % confidence interval of 74-88 t/ha. We mapped the distribution of TLC of the study area using the TLB model, where TLC was estimated at 47 % of TLB. Conclusion The low biomass in the miombo woodlands and the absence of a spectral data saturation problem suggest that Landsat 8 derived NDVI is suitable auxiliary information for carbon monitoring in the context of REDD+ for low-biomass, open-canopy woodlands.
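The NDVI-based prediction step can be sketched as follows. The model coefficients and reflectances are invented for illustration; the paper fits linear mixed-effects models to inventory plots rather than the plain linear predictor shown here.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from the NIR and red bands."""
    return (nir - red) / (nir + red)

# Hypothetical linear predictor TLB = a + b * NDVI (t/ha); coefficients and
# per-pixel reflectances below are assumptions, not the paper's estimates.
a, b = -20.0, 180.0
nir = np.array([0.35, 0.42, 0.30])
red = np.array([0.12, 0.10, 0.15])
v = ndvi(nir, red)
tlb = a + b * v            # total living biomass, t/ha
tlc = 0.47 * tlb           # carbon taken as 47% of living biomass, as in text
print(np.round(tlb, 1), np.round(tlc, 1))
```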
Weinstock, B André; Guiney, Linda M; Loose, Christopher
2012-11-01
We have developed a rapid, nondestructive analytical method that estimates the thickness of a surface polymer layer with high precision but unknown accuracy using a single attenuated total reflection Fourier transform infrared (ATR FT-IR) measurement. Because the method is rapid, nondestructive, and requires no sample preparation, it is ideal as a process analytical technique. Prior to implementation, the ATR FT-IR spectrum of the substrate layer pure component and the ATR FT-IR and real refractive index spectra of the surface layer pure component must be known. From these three input spectra a synthetic mid-infrared spectral matrix of surface layers 0 nm to 10,000 nm thick on substrate is created de novo. A minimum statistical distance match between a process sample's ATR FT-IR spectrum and the synthetic spectral matrix provides the thickness of that sample. We show that this method can be used to successfully estimate the thickness of polysulfobetaine surface modification, a hydrated polymeric surface layer covalently bonded onto a polyetherurethane substrate. A database of 1850 sample spectra was examined. Spectrochemical matrix-effect unknowns, such as the nonuniform and molecularly novel polysulfobetaine-polyetherurethane interface, were found to be minimal. A partial least squares regression analysis of the database spectra versus their thicknesses as calculated by the method described yielded an estimate of precision of ±52 nm.
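The minimum-statistical-distance match against a synthetic thickness library can be sketched as below. The toy single-band model and the thickness grid are assumptions for illustration, not the paper's optical model built from the three input spectra.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic library: one spectrum per candidate thickness (rows). Toy model:
# a band at channel 40 whose strength grows linearly with layer thickness.
channels = np.arange(100)
thicknesses = np.arange(0, 10_001, 50)                 # nm grid, assumed
band = np.exp(-0.5 * ((channels - 40) / 5.0) ** 2)
library = thicknesses[:, None] / 10_000 * band         # (n_thickness, n_channel)

def match_thickness(sample, library, thicknesses):
    """Minimum-Euclidean-distance match of a sample spectrum to the library."""
    dist = np.linalg.norm(library - sample, axis=1)
    return int(thicknesses[np.argmin(dist)])

truth_nm = 3200                                        # hypothetical sample
sample = truth_nm / 10_000 * band + rng.normal(0, 0.001, band.size)
est = match_thickness(sample, library, thicknesses)
print(est)   # recovers the 3200 nm truth at this low noise level
```

Finer thickness grids improve resolution at the cost of a larger synthetic matrix to search.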
Arnold, Mark E; Mueller-Doblies, Doris; Gosling, Rebecca J; Martelli, Francesca; Davies, Robert H
2015-01-01
Reports of Salmonella in ducks in the UK currently rely upon voluntary submissions from the industry, and as there is no harmonized statutory monitoring and control programme, it is difficult to compare data from different years in order to evaluate any trends in Salmonella prevalence in relation to sampling methodology. Therefore, the aim of this project was to assess the sensitivity of a selection of environmental sampling methods, including the sampling of faeces, dust and water troughs or bowls for the detection of Salmonella in duck flocks, and a range of sampling methods were applied to 67 duck flocks. Bayesian methods in the absence of a gold standard were used to provide estimates of the sensitivity of each of the sampling methods relative to the within-flock prevalence. There was a large influence of the within-flock prevalence on the sensitivity of all sample types, with sensitivity reducing as the within-flock prevalence reduced. Boot swabs (individual and pool of four), swabs of faecally contaminated areas and whole house hand-held fabric swabs showed the overall highest sensitivity for low-prevalence flocks and are recommended for use to detect Salmonella in duck flocks. The sample type with the highest proportion positive was a pool of four hair nets used as boot swabs, but this was not the most sensitive sample for low-prevalence flocks. All the environmental sampling types (faeces swabs, litter pinches, drag swabs, water trough samples and dust) had higher sensitivity than individual faeces sampling. None of the methods consistently identified all the positive flocks, and at least 10 samples would be required for even the most sensitive method (pool of four boot swabs) to detect a 5% prevalence. The sampling of dust had a low sensitivity and is not recommended for ducks.
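The closing sample-size claim can be illustrated under a simplifying assumption of independent samples with a fixed per-sample sensitivity; the 0.26 value below is invented so that the arithmetic echoes the "at least 10 samples" figure, and the paper's Bayesian model is more sophisticated.

```python
import math

def detection_probability(sensitivity, n_samples):
    """P(at least one positive) with n independent samples of the given
    per-sample sensitivity (independence is a simplifying assumption)."""
    return 1 - (1 - sensitivity) ** n_samples

def samples_needed(sensitivity, confidence=0.95):
    """Smallest n whose detection probability reaches the target confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - sensitivity))

se = 0.26   # hypothetical per-sample sensitivity at 5% within-flock prevalence
print(f"P(detect | 10 samples) = {detection_probability(se, 10):.3f}")
print(f"samples for 95% confidence: {samples_needed(se)}")
```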
Estimating total solar radiation in different climatological regions of Iran using the cloud factor
International Nuclear Information System (INIS)
Jafarpour, Kh.; Karshenas, M.
2002-01-01
Iran is among the countries located on the belt of lands with a high rate of solar insolation. Statistics show that, for instance, the solar energy that hit the Iranian continental land in the year 1990 alone was more than 1600 times the energy exported by Iran in the same year. This high rate of solar insolation on the one hand, and the limitation of fossil-fuel reserves (especially since utilizing energy from such sources pollutes the environment) on the other, show that harnessing solar energy is no longer a matter of choice but rather an obligation. To fulfill this obligation, solar insolation data are needed to design and evaluate solar energy systems and other applications under the different climatological conditions of Iran. As a first step, this article provides total solar radiation data for various cities in Iran under different climatological conditions, using the cloud factor as a parameter.
A differential absorption technique to estimate atmospheric total water vapor amounts
Frouin, Robert; Middleton, Elizabeth
1990-01-01
Vertically integrated water-vapor amounts can be remotely determined by measuring the solar radiance reflected by the earth's surface with satellite- or aircraft-based instruments. The technique is based on the method of Fowle (1912, 1913) and utilizes the 0.940-micron water-vapor band to retrieve total-water-vapor data that are independent of surface reflectance properties and other atmospheric constituents. A channel combination is proposed to provide more accurate results, the SE-590 spectrometer is used to verify the data, and the effects of atmospheric photon backscattering are examined. The spectrometer and radiosonde data confirm the accuracy of using a narrow and a wide channel centered on the same wavelength to determine water vapor amounts. The technique is suitable for cloudless conditions and can contribute to atmospheric corrections of land-surface parameters.
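The channel-ratio retrieval can be sketched with a square-root band model, a common weak-band approximation for water-vapor absorption; the coefficient `K` is an assumed calibration constant, not a value from the paper.

```python
import math

K = 0.6   # band absorption coefficient, hypothetical calibration constant

def transmittance(w):
    """Square-root band model T = exp(-K * sqrt(W)) for the 0.940-um band
    (a common weak-band approximation; K is an assumed value)."""
    return math.exp(-K * math.sqrt(w))

def retrieve_water_vapor(l_narrow, l_wide):
    """Invert the ratio of the narrow (in-band) to wide channel radiances.
    Centering both channels on the same wavelength cancels the surface
    reflectance, as in the differential absorption technique."""
    t = l_narrow / l_wide
    return (math.log(t) / -K) ** 2

w_true = 2.5                      # g/cm^2 column water vapor
l_wide = 100.0                    # arbitrary radiance units
l_narrow = l_wide * transmittance(w_true)
w_est = retrieve_water_vapor(l_narrow, l_wide)
print(w_est)   # recovers the 2.5 g/cm^2 input
```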
Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
Jacobs, Perke; Viechtbauer, Wolfgang
2017-06-01
Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
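The standard conversion from a point-biserial to a biserial coefficient needs only the normal ordinate at the dichotomization threshold, which Python's standard library provides; the input values below are illustrative.

```python
from statistics import NormalDist

def biserial_from_point_biserial(r_pb, p):
    """r_b = r_pb * sqrt(p*q) / h, where h is the standard normal density
    at the threshold splitting the latent variable into proportions p, q."""
    q = 1.0 - p
    h = NormalDist().pdf(NormalDist().inv_cdf(p))
    return r_pb * (p * q) ** 0.5 / h

# Median split (p = 0.5): the correction factor is sqrt(pi/2) ~ 1.2533, so
# the biserial estimate always exceeds the point-biserial in magnitude.
print(biserial_from_point_biserial(0.30, 0.5))
```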
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
International Nuclear Information System (INIS)
Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators (the matrix Dantzig selector and the matrix Lasso) with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations.
Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern
Energy Technology Data Exchange (ETDEWEB)
Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.
2015-03-15
An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in the detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns, each corresponding to a different known thickness. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two-beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for the Ar+ ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from thickness estimation, they can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort and analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction patterns.
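The matching step described above (compare the experimental pattern with simulations for known thicknesses and take the best match) can be sketched as follows; the similarity metric, the 1-D stand-in patterns, and the thickness grid are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ncc(a, b):
    """Normalized cross-correlation between two flattened intensity patterns."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Hypothetical simulated CBED intensity profiles for a series of known thicknesses.
thicknesses = np.arange(50, 160, 10)                      # nm
simulated = [np.sin(np.linspace(0, t / 10, 200)) for t in thicknesses]

# "Experimental" pattern: the 90 nm simulation plus measurement noise.
experimental = simulated[4] + 0.1 * rng.standard_normal(200)

# Pick the thickness whose simulation is most similar to the experiment.
scores = [ncc(experimental, s) for s in simulated]
best = thicknesses[int(np.argmax(scores))]
print(f"estimated thickness: {best} nm")
```

In the actual tool the comparison is done disk-by-disk on 2-D patterns, but the best-match-over-a-thickness-series logic is the same.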
Directory of Open Access Journals (Sweden)
Luiz Januário Magalhães Aroeira
2010-09-01
Full Text Available The aim was to evaluate the effectiveness of the internal markers indigestible fibers (ADFi and NDFi) and Klason lignin, and of the external markers chromic oxide and modified enriched hydroxyphenylpropane (LIPE®), for estimating the total intake of penned crossbred heifers (Holstein x Zebu). The heifers were assigned to four diets: elephant grass (Pennisetum purpureum Schum.) silage; elephant grass silage and commercial concentrate; chopped sugar cane and urea; and chopped sugar cane, urea and commercial concentrate. Chromic oxide underestimated the heifers' intake on all diets, and its estimates differed both from trough intake and from those obtained with the other markers. LIPE® may replace chromic oxide, because its intake estimates did not differ from trough intake on any diet. Klason lignin proved more appropriate for estimating the intake of heifers fed grass-silage-based diets than of those fed sugar cane: this marker underestimated the intake of heifers receiving sugar cane and urea (3.57 kg/day of DM) compared with the intake recorded at the trough (4.05 kg/day of DM), and likewise, for heifers receiving sugar cane, urea and concentrate, it underestimated intake (3.90 kg/day of DM versus 4.90 kg/day of DM at the trough). The indigestible fibers (ADFi and NDFi) were suitable for estimating intake on all diets. These results show that the markers perform differently depending on the roughage used.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously compromise the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
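For a single variable, the minimum-variance allocation under stratified sampling referred to above reduces to classical Neyman allocation; a sketch (the strata figures are invented, and the article's multi-variable allocation is more involved):

```python
import math

def neyman_allocation(n_total, strata):
    """Neyman allocation: n_h proportional to N_h * S_h, where N_h is the
    stratum size and S_h the stratum standard deviation."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    n_h = [n_total * w / total for w in weights]
    # Round down, then hand out the leftover units to the largest fractional parts
    # so the allocations still sum to n_total.
    n_int = [math.floor(x) for x in n_h]
    by_frac = sorted(range(len(n_h)), key=lambda i: n_h[i] - n_int[i], reverse=True)
    for i in by_frac[: n_total - sum(n_int)]:
        n_int[i] += 1
    return n_int

# Hypothetical strata: (population size N_h, standard deviation S_h)
strata = [(1000, 4.0), (2000, 3.0), (500, 8.0)]
print(neyman_allocation(300, strata))  # → [86, 128, 86]
```

More variable strata receive proportionally more sample, which is where the efficiency gain over proportional allocation comes from.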
Directory of Open Access Journals (Sweden)
Jacinta Dugbaza
2012-01-01
Full Text Available Mandatory folic acid fortification of wheat flour for making bread was implemented in Australia in September 2009 to improve the dietary folate status of women of child-bearing age and help reduce the incidence of neural tube defects in the population. This paper presents estimates of folic acid intake in the target population and other subgroups of the Australian population following implementation of the mandatory folic acid fortification standard. In June/July 2010, one hundred samples from seven bread categories were purchased from around the country and individually analysed for the amount of folic acid they contained. A modification of the triple-enzyme microbiological method was used to measure folic acid in the individual bread samples. The folic acid analytical values, together with national food consumption data, were used to generate estimates of the population's folic acid intake from fortified foods. Food Standards Australia New Zealand's (FSANZ) custom-built dietary modelling program (DIAMOND) was used for the estimates. The mean amount of folic acid found in white bread was 200 μg/100 g, which demonstrated that folic-acid-fortified wheat flour was used to bake the bread. The intake estimates indicated an increase in mean folic acid intake of 159 μg per day for the target group. Other sub-groups of the population also showed increases in estimated mean daily intake of folic acid.
International Nuclear Information System (INIS)
SIMPSON, B.C.
1999-01-01
In March 1999, staff at Lockheed Martin Hanford Company (LMHC) were asked to make a presentation to the Defense Nuclear Facilities Safety Board (DNFSB) about the safety of the waste tanks at the Hanford Site and the necessity for further tank sampling. Pacific Northwest National Laboratory (PNNL) provided a statistical analysis of available tank data to help determine whether additional sampling would in fact be required. The analytes examined were total alpha, energetics, total organic carbon (TOC), oxalate as TOC, and moisture. These analytes serve as indicators of the stability of tank contents; if any of them falls above or below certain values, further investigation is warranted (Dukelow et al. 1995). PNNL performed an analysis of the data collected on these safety screening analytes with respect to empirical distributions and the established Safety Screening Data Quality Objectives (SS DQO) thresholds and Basis for Interim Operations (BIO) limits. Both univariate and bivariate analyses were performed, and summary statistics and graphical representations of the data were generated.
International Nuclear Information System (INIS)
Mueller-Suur, R.; Magnusson, G.; Karolinska Inst., Stockholm; Bois-Svensson, I.; Jansson, B.
1991-01-01
Recent studies have shown that technetium-99m mercaptoacetyltriglycine (MAG-3) is a suitable replacement for iodine-131 or iodine-123 hippurate in gamma-camera renography. The determination of its clearance is also of value, since it correlates well with that of hippurate and thus may be an indirect measure of renal plasma flow. In order to simplify the clearance method, we developed formulas for the estimation of plasma clearance of MAG-3 based on a single plasma sample and compared them with the multiple-sample method based on 7 plasma samples. The correlation to effective renal plasma flow (ERPF) (according to Tauxe's method, using iodine-123 hippurate), which ranged from 75 to 654 ml/min per 1.73 m², was determined in these patients. Using the developed regression equations, the error of estimate for the simplified clearance method was acceptably low (18-14 ml/min) when the single plasma sample was taken 44-64 min post-injection. Formulas for different sampling times at 44, 48, 52, 56, 60 and 64 min are given, and we recommend 60 min as optimal, with an error of estimate of 15.5 ml/min. The correlation between the MAG-3 clearances and ERPF was high (r=0.90). Since normal values for MAG-3 clearance are not yet available, transformation to estimated ERPF values by the regression equation (ERPF = 1.86 × C(MAG-3) + 4.6) could be of clinical value in order to compare it with the normal values for ERPF given in the literature. (orig.)
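The regression equation reported in the abstract for converting MAG-3 clearance to estimated ERPF can be wrapped directly in a helper (the function name is ours; the coefficients are the abstract's):

```python
def erpf_from_mag3_clearance(c_mag3_ml_min):
    """Estimated ERPF (ml/min per 1.73 m^2) from MAG-3 plasma clearance,
    using the regression equation from the abstract: ERPF = 1.86*C_MAG3 + 4.6."""
    return 1.86 * c_mag3_ml_min + 4.6

# Example: a MAG-3 clearance of 150 ml/min.
print(round(erpf_from_mag3_clearance(150.0), 1))  # → 283.6
```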
An efficient modularized sample-based method to estimate the first-order Sobol' index
International Nuclear Information System (INIS)
Li, Chenzhao; Mahadevan, Sankaran
2016-01-01
Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly from available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of the model inputs, instead of an explicit user-defined location. The proposed method is modularized in two respects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Owing to this modularization, the proposed method is capable of computing the first-order index when only input–output samples are available but the underlying model is not, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index directly from input–output samples. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
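A simpler, commonly used sample-based estimator of the first-order index (binning on each input, not the paper's implicit-location method) can be sketched as follows; the toy model and its analytic indices are standard:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: Y = 2*X1 + X2 with X1, X2 ~ U(0, 1) independent.
# Analytic first-order indices: S1 = 4/5 = 0.8, S2 = 1/5 = 0.2.
N = 200_000
x1, x2 = rng.random(N), rng.random(N)
y = 2.0 * x1 + x2

def first_order_sobol(x, y, bins=50):
    """Binning estimator of S_i = Var(E[Y | X_i]) / Var(Y),
    using only input-output samples (no model re-evaluation)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return float(cond_means.var() / y.var())

print(f"S1 ≈ {first_order_sobol(x1, y):.3f}, S2 ≈ {first_order_sobol(x2, y):.3f}")
```

Like the paper's method, this works from a fixed input–output sample and reuses the same sample for every input, though the binning introduces a small bias that the paper's approach is designed to avoid.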
Directory of Open Access Journals (Sweden)
Jin-Ge Zhao
2016-03-01
Full Text Available Background: Silk sericin, together with a few non-protein components isolated from the cocoon layer of the silkworm Bombyx mori, has many bioactivities. Dietary sericin possesses antioxidant, anticancer, antihyperlipidemic, and antidiabetic activities. The non-protein components surrounding the sericin layer include wax, pigments (mainly flavonoids), sugars, and other impurities. However, very few investigations have reported estimates of the total flavonoids derived from the cocoon layer. The flavonoids are commonly present in their glycosylated forms and mostly exist as quercetin glycosides in the sericin layers of silkworm cocoons. Objective: The aim of this study was to find a more accurate method to estimate the level of total flavonoids in silkworm cocoons. Design: An efficient hydrolysis-assisted extraction (HAE) procedure was first established to estimate the level of total flavonoids through determination of their aglycones, quercetin and kaempferol. A comparison was then made between the traditional colorimetric method and our method, and the antioxidant activities of the HAE sample were determined. Results: The average contents of quercetin and kaempferol were 1.98 and 0.42 mg/g in Daizo cocoons, with recoveries of 99.56 and 99.17%. The sum of quercetin and kaempferol was 2.40 ± 0.07 mg/g by HAE-HPLC, while the total flavonoids estimated by the traditional colorimetric method (2.59 ± 0.48 mg/g) were equivalent to only 1.28 ± 0.04 mg/g of quercetin. The HAE sample also exhibited IC50 values for scavenging of the diphenylpicrylhydrazyl (DPPH) radical and the hydroxyl radical (HO·) of 243.63 µg/mL and 4.89 mg/mL, respectively. Conclusions: These results show that the HAE-HPLC method is specific to the cocoon and far superior to the colorimetric method. Therefore, this study has profound significance for the comprehensive utilization
A total diet study to estimate dioxin-like compounds intake from Taiwan food
Energy Technology Data Exchange (ETDEWEB)
Hsu, M.S.; Wang, S.M.; Chou, U.; Chen, S.Y.; Huang, N.C.; Liao, G.Y.; Yu, T.P.; Ling, Y.C. [National Tsing Hua Univ., Hsinchu (Taiwan)
2004-09-15
Food is the major route of human intake of toxic dioxin-like compounds (DLCs), which include polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzo-p-furans (PCDFs), and polychlorinated biphenyls (PCBs). Approximately 95% of human DLC exposure derives from food, with nearly 80% coming from food of animal origin. The DLC levels in foodstuffs and the food consumption rate are essential to evaluate the health risk posed to humans, and a lack of data on DLC levels in food increases the population's risk from DLC exposure. The Department of Health, Taiwan, entrusted us to conduct a comprehensive monitoring program on PCDD/F levels in Taiwan food (not including food of plant origin) in 2001 and 2002; in 2003, the program was extended to include the 12 WHO-PCBs among the analytes. A total diet study (TDS) of DLC intake from Taiwan food was therefore conducted for the first time. The DLC concentrations in food of animal origin and the food consumption rate were collected, and the average daily intake (ADI) and average weekly intake (AWI) of DLCs from food by Taiwanese adults were determined.
Jakubec, Petr; Bancirova, Martina; Halouzka, Vladimir; Lojek, Antonin; Ciz, Milan; Denev, Petko; Cibicek, Norbert; Vacek, Jan; Vostalova, Jitka; Ulrichova, Jitka; Hrbac, Jan
2012-08-15
This work describes the method for total antioxidant capacity (TAC) and/or total content of phenolics (TCP) analysis in wines using microdialysis online-coupled with amperometric detection using a carbon microfiber working electrode. The system was tested on 10 selected wine samples, and the results were compared with total reactive antioxidant potential (TRAP), oxygen radical absorbance capacity (ORAC), and chemiluminescent determination of total antioxidant capacity (CL-TAC) methods using Trolox and catechin as standards. Microdialysis online-coupled with amperometric detection gives similar results to the widely used cyclic voltammetry methodology and closely correlates with ORAC and TRAP. The problem of electrode fouling is overcome by the introduction of an electrochemical cleaning step (1-2 min at the potential of 0 V vs Ag/AgCl). Such a procedure is sufficient to fully regenerate the electrode response for both red and white wine samples as well as catechin/Trolox standards. The appropriate size of microdialysis probes enables easy automation of the electrochemical TAC/TCP measurement using 96-well microtitration plates.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
A new method in estimation of total hexavalent chromium in Portland pozzolan cement
International Nuclear Information System (INIS)
Sharma, R.; Sharma, D.
2017-01-01
Variamine blue was used for the first time for the detection of hexavalent chromium in cement samples. In the present method, cement was treated sequentially with water, sulphate buffer and carbonate buffer to extract soluble, sparingly soluble and insoluble hexavalent chromium, respectively. Extracted Cr(VI) was determined using variamine blue as the chromogenic reagent. The determination is based on the reaction of hexavalent chromium with potassium iodide in an acid medium to liberate iodine, which oxidizes variamine blue to form a violet-coloured species with an absorption maximum at 556 nm. Energy-dispersive X-ray spectroscopy (EDX) and infrared spectroscopy (IR) confirmed the complete extraction of hexavalent chromium by the sequential extraction process. SRM 2701 (a reference material from NIST, USA) was used for revalidating the results. The percentage recovery for the proposed method and the reference method (diphenylcarbazide method) varied from 98.5 to 101 and from 97.5 to 100.5, respectively, whereas their relative error percentages varied from -1.5 to 0.33 and from -2.5 to 0.5.
Estimates of the Tempo-adjusted Total Fertility Rate in Western and Eastern Germany, 1955-2008
Directory of Open Access Journals (Sweden)
Marc Luy
2011-09-01
Full Text Available In this article we present estimates of the tempo-adjusted total fertility rate in Western and Eastern Germany from 1955 to 2008. Tempo adjustment of the total fertility rate (TFR) requires data on the annual number of births by parity and age of the mother. Since official statistics do not provide such data for Western Germany, nor for Eastern Germany from 1990 on, we used alternative data sources that include these specific characteristics. The combined picture of the conventional TFR and the tempo-adjusted TFR* provides interesting information about trends in period fertility in Western and Eastern Germany, above all with regard to the differences between the two regions and the enormous extent of tempo effects in Eastern Germany during the 1990s. Compared to corresponding data for populations from other countries, our estimates of the tempo-adjusted TFR* for Eastern and Western Germany show plausible trends. Nevertheless, it is important to note that the estimates of the tempo-adjusted total fertility rate presented in this paper should not be seen as being on the level of, or equivalent to, official statistics, since they are based on different kinds of data with different degrees of quality.
Total column water vapor estimation over land using radiometer data from SAC-D/Aquarius
Epeloa, Javier; Meza, Amalia
2018-02-01
The aim of this study is to retrieve atmospheric total column water vapor (CWV) over land surfaces using the microwave radiometer (MWR) on board the Scientific Argentine Satellite (SAC-D/Aquarius). To achieve this goal, a statistical algorithm is used that filters the study region according to climate type. A log-linear relationship between the MWR brightness temperatures and CWV obtained from Global Navigation Satellite System (GNSS) measurements was used. In this statistical algorithm, the retrieved CWV is derived from the brightness temperatures of the Argentine radiometer, which operates at 23.8 GHz and 36.5 GHz, taking into account CWV observed from GNSS stations within a region sharing the same climate type. We support this approach because we found a systematic effect when applying the algorithm: it was generated for one region using the criteria mentioned above, but it should be applied to additional regions, especially those with other climate types. The region we analyzed is in the southeastern United States, where the climate type is Cfa (Köppen-Geiger classification); this climate type comprises moist subtropical mid-latitude climates with hot, muggy summers and frequent thunderstorms. The standard MWR product, however, contains only measurements taken over ocean surfaces; the determination of water vapor over land is therefore an important contribution that extends the use of SAC-D/Aquarius radiometer measurements beyond the ocean surface. The CWV values computed by our algorithm were compared against radiosonde CWV observations and show a bias of about -0.6 mm, a root mean square (rms) error of about 6 mm and a correlation of 0.89.
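Fitting a log-linear retrieval model of the kind described can be sketched on synthetic data. The coefficients, temperature ranges, and noise level below are invented; only the general form ln(CWV) = a + b·Tb(23.8) + c·Tb(36.5) is taken from the abstract, and even its exact form is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "training" data: brightness temperatures (K) and GNSS-derived CWV (mm).
n = 500
tb23 = rng.uniform(180, 260, n)   # 23.8 GHz channel
tb36 = rng.uniform(160, 240, n)   # 36.5 GHz channel
# Invented log-linear truth used to generate the synthetic CWV values.
a, b, c = -1.0, 0.018, 0.002
cwv = np.exp(a + b * tb23 + c * tb36 + 0.02 * rng.standard_normal(n))

# Fit the log-linear model by ordinary least squares on ln(CWV).
X = np.column_stack([np.ones(n), tb23, tb36])
coef, *_ = np.linalg.lstsq(X, np.log(cwv), rcond=None)
print("fitted (a, b, c):", np.round(coef, 4))
```

In the study, the training pairs come from collocated MWR brightness temperatures and GNSS CWV within one climate-type region; the fitted coefficients are then applied to new MWR observations over land in that region.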
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.
Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry
2011-10-01
Unlike the core structural elements of a protein such as regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops such as the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method in the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.
Directory of Open Access Journals (Sweden)
Hyun Joo
2011-10-01
Full Text Available Unlike the core structural elements of a protein such as regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops such as the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method in the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.
Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity
Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry
2011-01-01
Unlike the core structural elements of a protein such as regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops such as the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method in the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
Estimation of sampling error uncertainties in observed surface air temperature change in China
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained with other uncertainty estimation methods, which is a plausible reason for the inconsistencies between our estimate and other studies for this period.
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work is to reduce the cost of the sampling required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual predictive check were used as criteria. The results of jack-knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were thus developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application, according to requirements.
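An LSS model of the kind described is an ordinary multiple regression of AUC on a few concentration-time points; a sketch on synthetic data (the regression weights, concentrations, and noise are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic concentrations (mg/L) at the key sampling times 2, 4, 8 and 12 h,
# and an invented "true" AUC0-60 that is a linear combination of them.
n = 60
C = rng.uniform(0.5, 4.0, size=(n, 4))        # columns: 2, 4, 8, 12 h
true_beta = np.array([5.0, 4.0, 8.0, 20.0])   # hypothetical weights
auc = C @ true_beta + 2.0 + rng.normal(0, 1.0, n)

# Fit the LSS model AUC ≈ b0 + b1*C2 + b2*C4 + b3*C8 + b4*C12 by least squares.
X = np.column_stack([np.ones(n), C])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)

pred = X @ beta
ape = np.abs(pred - auc) / auc * 100          # absolute prediction error (%)
print(f"mean APE: {ape.mean():.2f}%")
```

In practice the model would be fit on full concentration-time profiles and validated (e.g. by jack-knife) against the observed AUC, with the 15% APE threshold used as the acceptance criterion.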
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test, yet in practice researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram for varying absolute precision and known prevalence of disease at the 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied in reverse. This nomogram is not applicable to hypothesis-testing setups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
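The formula such a nomogram typically encodes (an assumption on our part, since the abstract only cites "the formula already available in the literature") is n = z²·Se(1−Se)/d², inflated by the prevalence for sensitivity (or by 1−prevalence for specificity):

```python
import math

def n_for_sensitivity(se, precision, prevalence, z=1.96):
    """Total subjects needed to estimate sensitivity `se` within absolute
    precision `precision` at ~95% confidence, given disease prevalence."""
    n_diseased = z**2 * se * (1 - se) / precision**2
    return math.ceil(n_diseased / prevalence)

def n_for_specificity(sp, precision, prevalence, z=1.96):
    """Same idea for specificity, scaled by the non-diseased fraction."""
    n_healthy = z**2 * sp * (1 - sp) / precision**2
    return math.ceil(n_healthy / (1 - prevalence))

# Example: anticipated sensitivity 0.90, precision ±0.05, prevalence 0.20.
print(n_for_sensitivity(0.90, 0.05, 0.20))  # → 692
```

The 0.70 and 1.75 multipliers quoted in the abstract are consistent with this form: (1.645/1.96)² ≈ 0.70 and (2.576/1.96)² ≈ 1.73.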
2014-06-17
[Figure panels: Wigner distribution, L-Wigner distribution, and their auto-correlation functions.] ...bilinear or higher-order autocorrelation functions will increase the number of missing samples, but the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with
Effects of sampling conditions on DNA-based estimates of American black bear abundance
Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.
2013-01-01
DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability
Sex Estimation From Modern American Humeri and Femora, Accounting for Sample Variance Structure
DEFF Research Database (Denmark)
Boldsen, J. L.; Milner, G. R.; Boldsen, S. K.
2015-01-01
Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past several decades. Results: For measurements individually and collectively, the probabilities of being one sex or the other were generated for samples with an equal distribution of males and females, taking into account the variance structure of the original measurements. The combination providing the best...
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always contain measurement error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, point estimates of the reliability of composite measures are fallible, and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
Area-under-the-curve C-peptide values following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent-onset type 1 diabetes, measured from baseline to 12 months after enrollment, were modelled to produce estimates of the rate of loss and its variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
Area-under-the-curve C-peptide values following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, measured from baseline to 12 months after enrolment, were modelled to produce estimates of the rate of loss and its variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Beer, M.
1980-01-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalues are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum-variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum-variance estimation for a sufficient number of histories and aggregates.
Estimation variance bounds of importance sampling simulations in digital communication systems
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
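The variance being bounded is that of the importance-sampling estimator itself. A minimal, self-contained sketch of IS for a Gaussian tail probability, a stand-in for a rare bit-error event; the Gaussian model and the mean-shift parameter are illustrative, not the authors' system:

```python
import math
import random

def is_tail_prob(threshold, n, shift, seed=1):
    """Importance-sampling estimate of p = P(X > threshold) for X ~ N(0,1),
    sampling from the shifted density N(shift, 1).
    Returns (estimate, per-sample variance of the weighted indicator)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        y = rng.gauss(shift, 1.0)                  # draw from the IS density
        # likelihood ratio phi(y) / phi(y - shift)
        w = math.exp(-y * shift + shift ** 2 / 2)
        vals.append(w if y > threshold else 0.0)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

est, var = is_tail_prob(4.0, 100000, 4.0)  # true p = 1 - Phi(4) ~ 3.17e-5
```

Under direct Monte Carlo the per-sample variance would be about p(1 - p), roughly 3.2e-5 here; shifting the sampling density to the rare-event region reduces it by several orders of magnitude, which is the improvement ratio the bounds in the paper quantify.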
Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.
Niioka, Takenori
2011-03-01
Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotype had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 x C(6h) + 2638.03, AUC(LPZ) = 12.32 x C(6h) + 3276.09 and AUC(RPZ) = 1.39 x C(3h) + 7.17 x C(6h) + 344.14, respectively. To optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetics of its enantiomers. The best prediction formula using the fewest sampling points (one time point) was AUC(racemic LPZ) = 6.5 x C(3h) of (R)-LPZ + 13.7 x C(3h) of (S)-LPZ - 9917.3 x G1 - 14387.2 x G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, monitoring plasma concentration at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
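The published single-drug formulas can be applied directly; a minimal sketch (concentration and AUC units follow the original article; the function names are illustrative):

```python
def auc_opz(c6h):
    """Omeprazole AUC from the 6 h concentration (review's best formula)."""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """Lansoprazole AUC from the 6 h concentration."""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """Rabeprazole AUC from the 3 h and 6 h concentrations."""
    return 1.39 * c3h + 7.17 * c6h + 344.14
```

For example, auc_rpz(50.0, 80.0) combines both sampling points into a single AUC prediction; the one- or two-point design is what makes the strategy "limited sampling".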
Li, Longhai; Feng, Cindy X; Qiu, Shi
2017-06-30
An important statistical task in disease mapping is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating the predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution, without reference to the actual observation. By the general theory of importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to those estimated with actual LOOCV and outperform those given by the three existing methods, namely posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Sueli A. Mingoti
2001-06-01
Full Text Available Consumer surveys are often conducted by companies with the main objective of obtaining information about consumers' opinions of a specific prototype, product or service. In many situations the goal is to identify the characteristics that consumers consider important when deciding to buy or use the product or service. When a survey is performed, some characteristics present in the consumer population might not be reported by the consumers in the observed sample; important characteristics of the product, according to consumer opinion, could therefore be missing from the observed sample. The main objective of this paper is to show how the number of characteristics missing from the observed sample can be easily estimated using Bayesian estimators proposed by Mingoti & Meeden (1992) and Mingoti (1999). An example of application to an automobile survey is presented.
Estimates of laboratory accuracy and precision on Hanford waste tank samples
International Nuclear Information System (INIS)
Dodd, D.A.
1995-01-01
A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The set of sources used to generate these estimates is limited in size, but it does include the physical forms, liquid and solid, that are representative of samples from the tanks to be characterized. The estimates are published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates in the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions: programmatic requirements different from those listed may be reliably met for specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI, owing to the computational difficulty of calculating it in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are already available from probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it estimates the EVSI accurately. Its computational time is competitive with other methods. We have thus developed a new calculation method for the EVSI that is computationally efficient and accurate. The method relies on some additional simulation and so can be expensive in models with a large computational cost.
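For intuition about the quantity being approximated: in a conjugate normal toy model the preposterior mean has a known distribution, so the EVSI can be computed by plain Monte Carlo over that distribution. This is an illustrative sketch of the definition only, not the moment-matching algorithm of the paper; all parameter values are made up:

```python
import math
import random

def evsi_normal(mu0, sd0, noise_sd, n, draws=200000, seed=7):
    """EVSI when incremental net benefit theta ~ N(mu0, sd0^2) and the
    planned study of size n yields a mean with noise sd noise_sd/sqrt(n).
    Conjugacy gives preposterior mean ~ N(mu0, tau^2)."""
    rng = random.Random(seed)
    tau = sd0 * math.sqrt(sd0 ** 2 / (sd0 ** 2 + noise_sd ** 2 / n))
    # value of deciding after seeing the study, minus value of deciding now
    post = sum(max(rng.gauss(mu0, tau), 0.0) for _ in range(draws)) / draws
    return post - max(mu0, 0.0)

# EVSI grows with the planned sample size, approaching the EVPI
evsi_small = evsi_normal(0.0, 1.0, 2.0, n=4)
evsi_large = evsi_normal(0.0, 1.0, 2.0, n=1000)
```

With mu0 = 0 the EVPI is sd0 / sqrt(2*pi), about 0.399 here, which bounds both values; the practical difficulty the paper addresses is that real models have no such closed-form preposterior distribution, hence the moment-matching approximation.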
Integration of sampling based battery state of health estimation method in electric vehicles
International Nuclear Information System (INIS)
Ozkurt, Celil; Camci, Fatih; Atamuradov, Vepa; Odorry, Christopher
2016-01-01
Highlights: • Presentation of a prototype system with full charge-discharge cycling capability. • Presentation of SoH estimation results for systems degraded in the lab. • Discussion of integration alternatives for the presented method in EVs. • Simulation model based on the presented SoH estimation for a real EV battery system. • Optimization of the number of battery cells to be selected for the SoH test. - Abstract: Battery cost is one of the crucial parameters hindering wide deployment of Electric Vehicles (EVs). Accurate State of Health (SoH) estimation plays an important role in reducing the total cost of ownership and improving the availability and safety of the battery, by avoiding early disposal of batteries and decreasing unexpected failures. A circuit design for SoH estimation in a battery system, based on selected battery cells, and its integration into EVs are presented in this paper. A prototype microcontroller was developed and used for accelerated aging tests on a battery system. The data collected in the lab tests were used to simulate a real EV battery system. Results of the accelerated aging tests and the simulation are presented. The paper also discusses identifying the best number of battery cells to select for the SoH estimation test, as well as different application options for the presented approach in EV batteries.
International Nuclear Information System (INIS)
Shao Lijun; Gan Wuer; Su Qingde
2006-01-01
An atomic fluorescence spectrometry system for the determination of total and inorganic mercury, with electromagnetic induction-assisted heating for on-line oxidation, has been developed. Potassium peroxodisulphate was used as the oxidizing agent to decompose organomercury compounds. Depending on the temperature selected, inorganic or total mercury could be determined with the same manifold. Special attention was paid to the parameters influencing the on-line digestion efficiency, and the tolerance to interference from coexisting ions was carefully examined. Under optimal conditions, the detection limits (3σ) were 2.9 ng L(-1) for inorganic mercury and 2.6 ng L(-1) for total mercury. The relative standard deviations for 10 replicate determinations of 1.0 μg L(-1) Hg were 2.4 and 3.2% for inorganic and total mercury, respectively. The proposed method was successfully applied to the determination of total and inorganic mercury in fish samples.
Directory of Open Access Journals (Sweden)
Purna A. Chander
2014-02-01
Full Text Available Objective: To study the anthelmintic activity of Barleria buxifolia leaf and to estimate its total flavonoid content. Methods: Aqueous and ethanolic leaf extracts were prepared and analyzed for total flavonoid content by the aluminium chloride colorimetric method, and Pheretima posthuma was used for the anthelmintic assay at different concentrations (10, 20, 40, 80 and 100 mg/mL). Results: All the investigated extracts showed anthelmintic activity at a concentration of 10 mg/mL. The ethanolic extract at 100 mg/mL produced a significant effect (P<0.001) compared to the aqueous extract. The total flavonoid content was found to be 5.67 mg QE/100 g. Conclusions: The leaf extract showed good anthelmintic activity.
Leong, Yin-Hui; Chiang, Pui-Nyuk; Jaafar, Hajjaj Juharullah; Gan, Chee-Yuen; Majid, Mohamed Isa Abdul
2014-04-01
A total of 126 food samples, categorised into three groups (seafood and seafood products, meat and meat products, as well as milk and dairy products) from Malaysia were analysed for polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs). The concentration of PCDD/Fs that ranged from 0.16 to 0.25 pg WHO05-TEQ g(-1) fw was found in these samples. According to the food consumption data from the Global Environment Monitoring System (GEMS) of the World Health Organization (WHO), the dietary exposures to PCDD/F from seafood and seafood products, meat and meat products, as well as milk and dairy products for the general population in Malaysia were 0.064, 0.183 and 0.736 pg WHO05-TEQ kg(-1) bw day(-1), respectively. However, the exposure was higher in seafood and seafood products (0.415 pg WHO05-TEQ kg(-1) bw day(-1)) and meat and meat products (0.317 pg WHO05-TEQ kg(-1) bw day(-1)) when the data were estimated using the Malaysian food consumption statistics. The lower exposure was observed in dairy products with an estimation of 0.365 pg WHO05-TEQ kg(-1) bw day(-1). Overall, these dietary exposure estimates were much lower than the tolerable daily intake (TDI) as recommended by WHO. Thus, it is suggested that the dietary exposure to PCDD/F does not represent a risk for human health in Malaysia.
Directory of Open Access Journals (Sweden)
Deborah P. Shutt
2017-12-01
Full Text Available As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.
Directory of Open Access Journals (Sweden)
Masaharu Kagawa
2014-05-01
Full Text Available The aim of the study was to examine differences in total body water (TBW) measured using the single-frequency (SF) and multi-frequency (MF) modes of bioelectrical impedance spectroscopy (BIS) in children and adults measured in different postures, using the deuterium (2H) dilution technique as the reference. Twenty-three boys and 26 adult males underwent assessment of TBW using the dilution technique and BIS measured in supine and standing positions, using two frequencies of the SF mode (50 kHz and 100 kHz) and the MF mode. While TBW estimated from the MF mode was comparable, extra-cellular fluid (ECF) and intra-cellular fluid (ICF) values differed significantly (p < 0.01) between the different postures in both groups. In addition, while estimated TBW in adult males using the MF mode was significantly (p < 0.01) greater than the result from the dilution technique, TBW estimated using the SF mode and a prediction equation was significantly (p < 0.01) lower in boys. Measurement posture may not affect estimation of TBW in boys and adult males; however, body fluid shifts may still occur. In addition, technical factors, including the selection of the prediction equation, may be important when TBW is estimated from measured impedance.
Shutt, Deborah P; Manore, Carrie A; Pankavich, Stephen; Porter, Aaron T; Del Valle, Sara Y
2017-12-01
As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
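The fitting approach named above, Approximate Bayesian Computation, can be illustrated with a minimal rejection-ABC sketch on a toy stochastic SIR (Reed-Frost) model. The prior range, tolerance, and outbreak numbers here are illustrative only, not the paper's model or data:

```python
import random

def sir_final_size(r0, n, i0, rng):
    """Final outbreak size of a stochastic Reed-Frost SIR epidemic."""
    s, i, total = n - i0, i0, i0
    while i > 0 and s > 0:
        p = 1.0 - (1.0 - r0 / n) ** i            # infection prob per susceptible
        new = sum(1 for _ in range(s) if rng.random() < p)
        s -= new
        total += new
        i = new
    return total

def abc_r0(observed, n, i0=5, draws=1500, tol=0.05, seed=3):
    """Rejection ABC: keep draws of R0 whose simulated final size lies
    within tol * n of the observed final size."""
    rng = random.Random(seed)
    kept = []
    for _ in range(draws):
        r0 = rng.uniform(1.0, 8.0)               # flat prior on R0
        if abs(sir_final_size(r0, n, i0, rng) - observed) < tol * n:
            kept.append(r0)
    return kept

accepted = abc_r0(observed=797, n=1000)          # ~ final size when R0 = 2
```

The accepted draws approximate the posterior of R0 given the summary statistic; in the paper the same idea is applied with a vector-borne transmission model and the reported case curves, yielding the R0 and reporting-rate distributions quoted above.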
Chai, Lilong; Kröbel, Roland; Janzen, H. Henry; Beauchemin, Karen A.; McGinn, Sean M.; Bittman, Shabtai; Atia, Atta; Edeogu, Ike; MacDonald, Douglas; Dong, Ruilan
2014-08-01
Animal feeding operations are primary contributors of anthropogenic ammonia (NH3) emissions in North America and Europe. Mathematical modeling of NH3 volatilization from each stage of livestock manure management allows comprehensive quantitative estimates of emission sources and nutrient losses. A regionally-specific mass balance model based on total ammoniacal nitrogen (TAN) content in animal manure was developed for estimating NH3 emissions from beef farming operations in western Canada. Total N excretion in urine and feces was estimated from animal diet composition, feed dry matter intake and N utilization for beef cattle categories and production stages. Mineralization of organic N, immobilization of TAN, nitrification, and denitrification of N compounds in manure were incorporated into the model to account for quantities of TAN at each stage of manure handling. Ammonia emission factors were specified for different animal housing (feedlots, barns), grazing, manure storage (including composting and stockpiling) and land spreading (tilled and untilled land), and were modified for temperature. The model computed NH3 emissions from all beef cattle sub-classes including cows, calves, breeding bulls, steers for slaughter, and heifers for slaughter and replacement. Estimated NH3 emissions were about 1.11 × 10^5 Mg NH3 in Alberta in 2006, with a mean of 18.5 kg animal(-1) yr(-1) (15.2 kg NH3-N animal(-1) yr(-1)), which is 23.5% of the annual N intake of beef cattle (64.7 kg animal(-1) yr(-1)). The percentage of N intake volatilized as NH3-N was 50% for steers and heifers for slaughter, and between 11 and 14% for all other categories. Steers and heifers for slaughter were the two largest contributors (3.5 × 10^4 and 3.9 × 10^4 Mg, respectively) at 31.5 and 32.7% of total NH3 emissions, because most growing animals were finished in feedlots. Animal housing and grazing contributed roughly 63% of the total NH3 emissions (feedlots, barns and pastures contributed 54.4, 0.2 and 8.1% of
Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry
International Nuclear Information System (INIS)
Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.
2010-01-01
A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers at regular intervals is essential to assess intake of uranium by workers in this facility. The design and engineering safety features of the plant are such that there is very low probability of uranium becoming airborne during normal operations; however, leakages from the system during routine maintenance may result in intake of uranium by workers. As per the new biokinetic model for uranium, 63% of uranium entering the blood stream is directly excreted in urine. Therefore, bioassay monitoring (urinalysis) was recommended for these workers. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. For this purpose, a sample collection kit along with an instruction slip was provided to the workers. Bioassay samples received were wet-ashed with concentrated nitric acid and hydrogen peroxide to break down the metabolized complexes of uranium, and the uranium was co-precipitated with calcium phosphate. Separation of uranium from the matrix was done using an ion exchange technique, and final activity quantification was done using a laser fluorimeter (Quantalase, Model No. NFL/02). Calibration of the laser fluorimeter is done using a 10 ppb uranium standard (WHO, France Ref. No. 180000), and system performance is verified by measuring the concentration of uranium in standards (1 ppb to 100 ppb). The standard addition method was followed for estimation of the uranium concentration in the samples. Uranyl ions present in the sample are excited by a pulsed nitrogen laser at 337.1 nm and, on de-excitation, emit fluorescence light (540 nm) whose intensity is measured by the PMT. To estimate the uranium in the bioassay samples, a known aliquot of the sample was mixed with 5% sodium pyrophosphate and the fluorescence intensity was measured
Forester, James D; Im, Hae Kyung; Rathouz, Paul J
2009-12-01
Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to
Energy Technology Data Exchange (ETDEWEB)
Barros, Haydn [Laboratorio de Fisica Nuclear, Dpto. De Fisica, Universidad Simon Bolivar, Sartenejas, Baruta (Venezuela, Bolivarian Republic of); Marco Parra, Lue-Meru, E-mail: luemerumarco@yahoo.e [Universidad Centroccidental Lisandro Alvarado, Dpto. Quimica y Suelos, Decanato de Agronomia, Tarabana, Cabudare, Edo.Lara (Venezuela, Bolivarian Republic of); Bennun, Leonardo [Universidad de Concepcion, Concepcion (Chile); Greaves, Eduardo D. [Laboratorio de Fisica Nuclear, Dpto. De Fisica, Universidad Simon Bolivar, Sartenejas, Baruta (Venezuela, Bolivarian Republic of)
2010-06-15
The determination of arsenic in water samples requires techniques of high sensitivity. Total Reflection X-Ray Fluorescence (TXRF) allows the determination but a prior separation and pre-concentration procedure is necessary. Alumina is a suitable substrate for the selective separation of the analytes. A method for separation and pre-concentration in alumina, followed by direct analysis of the alumina, is evaluated. Quantification was performed using the Al-Kα and Co-Kα lines as internal standard in samples prepared on an alumina matrix, and compared to a calibration with aqueous standards. Artificial water samples of As(III) and As(V) were analyzed after the treatment. Fifty milliliters of the sample at ppb concentration levels were mixed with 10 mg of alumina. The pH, time and temperature were controlled. The alumina was separated from the slurry by centrifugation, washed with de-ionized water and analyzed directly on the sample holder. A pre-concentration factor of 100 was found, with a detection limit of 0.7 μg L⁻¹. The percentage of recovery was 98% for As(III) and 95% for As(V), demonstrating the suitability of the procedure.
International Nuclear Information System (INIS)
Barros, Haydn; Marco Parra, Lue-Meru; Bennun, Leonardo; Greaves, Eduardo D.
2010-01-01
The determination of arsenic in water samples requires techniques of high sensitivity. Total Reflection X-Ray Fluorescence (TXRF) allows the determination but a prior separation and pre-concentration procedure is necessary. Alumina is a suitable substrate for the selective separation of the analytes. A method for separation and pre-concentration in alumina, followed by direct analysis of the alumina, is evaluated. Quantification was performed using the Al-Kα and Co-Kα lines as internal standard in samples prepared on an alumina matrix, and compared to a calibration with aqueous standards. Artificial water samples of As(III) and As(V) were analyzed after the treatment. Fifty milliliters of the sample at ppb concentration levels were mixed with 10 mg of alumina. The pH, time and temperature were controlled. The alumina was separated from the slurry by centrifugation, washed with de-ionized water and analyzed directly on the sample holder. A pre-concentration factor of 100 was found, with a detection limit of 0.7 μg L⁻¹. The percentage of recovery was 98% for As(III) and 95% for As(V), demonstrating the suitability of the procedure.
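The quoted figures of merit reduce to simple ratios; a minimal sketch (the 0.5 mL effective measured volume is our illustrative assumption, chosen only to reproduce the reported factor of 100):

```python
def preconcentration_factor(v_sample_ml, v_presented_ml):
    """Volume-based pre-concentration factor."""
    return v_sample_ml / v_presented_ml

def recovery_percent(measured, spiked):
    """Percentage recovery of a spiked analyte (any consistent units)."""
    return 100.0 * measured / spiked

# 50 mL of sample concentrated onto alumina presented as ~0.5 mL equivalent
factor = preconcentration_factor(50.0, 0.5)   # 100.0
rec = recovery_percent(9.8, 10.0)             # 98.0 (illustrative spike)
```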
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Hongxing; Li, Min
2018-04-01
Vertical total electron content (VTEC) parameters estimated using global navigation satellite system (GNSS) data are of great interest for ionosphere sensing. Satellite differential code biases (SDCBs) account for one source of error which, if left uncorrected, can deteriorate the performance of positioning, timing and other applications. The customary approach to estimating VTEC along with SDCBs from dual-frequency GNSS data, hereinafter referred to as the DF approach, consists of two sequential steps. The first step seeks to retrieve ionospheric observables through the carrier-to-code leveling technique. These observables, related to the slant total electron content (STEC) along the satellite-receiver line-of-sight, are biased also by the SDCBs and the receiver differential code biases (RDCBs). By means of a thin-layer ionospheric model, in the second step one is able to isolate the VTEC, the SDCBs and the RDCBs from the ionospheric observables. In this work, we present a single-frequency (SF) approach enabling the joint estimation of VTEC and SDCBs using low-cost receivers; this approach is also based on two steps and differs from the DF approach only in the first step, where we turn to the precise point positioning technique to retrieve from the single-frequency GNSS data the ionospheric observables, interpreted as the combination of the STEC, the SDCBs and the biased receiver clocks at the pivot epoch. Our numerical analyses clarify how the SF approach performs when applied to GPS L1 data collected by a single receiver under both calm and disturbed ionospheric conditions. The daily time series of zenith VTEC estimates has an accuracy ranging from a few tenths of a TEC unit (TECU) to approximately 2 TECU. For 73-96% of GPS satellites in view, the daily estimates of SDCBs do not deviate, in absolute value, by more than 1 ns from their ground truth values published by the Centre for Orbit Determination in Europe.
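The first step of the DF approach, carrier-to-code leveling, can be sketched as shifting the precise but ambiguity-biased geometry-free phase onto the noisy but unambiguous geometry-free code by their arc-averaged offset. The numbers below are synthetic, not real GNSS data:

```python
def carrier_to_code_level(phase_gf, code_gf):
    """Level the geometry-free carrier phase onto the code observable:
    add the mean (code - phase) offset over a continuous arc."""
    offset = sum(c - p for p, c in zip(phase_gf, code_gf)) / len(phase_gf)
    return [p + offset for p in phase_gf]

# Synthetic arc: ionospheric delay ramps from 5.0 to 6.9 (arbitrary units);
# phase tracks it with an ambiguity bias of -12.3, code is unbiased but noisy.
truth = [5.0 + 0.1 * i for i in range(20)]
noise = [0.3, -0.2, 0.1, -0.3, 0.2, -0.1, 0.3, -0.2, 0.1, -0.3,
         0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2, -0.1, 0.3, -0.2]
phase = [t - 12.3 for t in truth]
code = [t + n for t, n in zip(truth, noise)]
leveled = carrier_to_code_level(phase, code)
```

The leveled series keeps the low noise of the phase while inheriting the code's datum; in the DF approach it is still biased by the SDCBs and RDCBs, which the second step separates.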
Directory of Open Access Journals (Sweden)
Ina C Ansmann
Full Text Available Moreton Bay, Queensland, Australia is an area of high biodiversity and conservation value and home to two sympatric sub-populations of Indo-Pacific bottlenose dolphins (Tursiops aduncus). These dolphins live in close proximity to major urban developments. Successful management requires information regarding their abundance. Here, we estimate total and effective population sizes of bottlenose dolphins in Moreton Bay using photo-identification and genetic data collected during boat-based surveys in 2008-2010. Abundance (N) was estimated using open population mark-recapture models based on sighting histories of distinctive individuals. Effective population size (Ne) was estimated using the linkage disequilibrium method based on nuclear genetic data at 20 microsatellite markers in skin samples, and corrected for bias caused by overlapping generations (Nec). A total of 174 sightings of dolphin groups were recorded and 365 different individuals identified. Over the whole of Moreton Bay, a population size N of 554 ± 22.2 (SE) (95% CI: 510-598) was estimated. The southern bay sub-population was small at an estimated N = 193 ± 6.4 (SE) (95% CI: 181-207), while the North sub-population was more numerous, with 446 ± 56 (SE) (95% CI: 336-556) individuals. The small estimated effective population size of the southern sub-population (Nec = 56, 95% CI: 33-128) raises conservation concerns. A power analysis suggested that reliably detecting small (5%) declines in the size of this population would require substantial survey effort (>4 years of annual mark-recapture surveys at the precision levels achieved here). To ensure that ecological as well as genetic diversity within this population of bottlenose dolphins is preserved, we consider that North and South sub-populations should be treated as separate management units. Systematic surveys over smaller areas holding locally-adapted sub-populations are suggested as an alternative method for increasing ability to detect
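The open-population models used in the study are beyond a few lines, but the underlying mark-recapture logic of photo-identification can be illustrated with the closed-population Chapman estimator. The counts below are made up, not the study's data:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    n1 individuals identified on occasion 1, n2 on occasion 2,
    m2 seen on both occasions."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 120 distinctive dolphins in survey 1, 110 in survey 2, 23 resighted
n_hat = chapman_estimate(120, 110, 23)  # about 559
```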
Estimation of uranium isotope in urine samples using extraction chromatography resin
International Nuclear Information System (INIS)
Thakur, Smita S.; Yadav, J.R.; Rao, D.D.
2012-01-01
Internal exposure monitoring for alpha-emitting radionuclides is carried out by analysis of bioassay samples. For occupational radiation workers handling uranium in reprocessing or fuel fabrication facilities, there exists a possibility of internal exposure, and urine assay is the preferred method for monitoring such exposure. Estimation of low concentrations of uranium at the mBq level by alpha spectrometry requires pre-concentration and separation from a large volume of urine sample. For this purpose, urine samples collected from non-radiation workers were spiked with ²³²U tracer at the mBq level to estimate the chemical yield. Uranium in the urine samples was pre-concentrated by calcium phosphate co-precipitation and separated on U/TEVA extraction chromatography resin, in which the extractant DAAP (diamylamylphosphonate) is supported on inert Amberlite XAD-7 material. After co-precipitation, the precipitate was centrifuged and dissolved in 10 ml of 1 M Al(NO₃)₃ prepared in 3 M HNO₃. The sample thus prepared was loaded on the extraction chromatography resin, pre-conditioned with 10 ml of 3 M HNO₃. The column was washed with 10 ml of 3 M HNO₃, then rinsed with 5 ml of 9 M HCl followed by 20 ml of 0.05 M oxalic acid prepared in 5 M HCl to remove interference from Th and Np if present in the sample. Uranium was eluted from the U/TEVA column with 15 ml of 0.01 M HCl. The eluted uranium fraction was electrodeposited on a stainless steel planchet and counted by alpha spectrometry for 360,000 s. The approximate analysis time from sample loading to stripping is 2 hours, compared with 3.5 hours for the conventional ion exchange method. Seven urine samples from non-radiation workers were radiochemically analyzed by this technique, and the radiochemical yield was found to be in the range of 69-91%. The efficacy of this method against the conventional anion exchange technique earlier standardized at this laboratory is also highlighted. Minimum detectable activity
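The chemical yield from the ²³²U tracer, and a minimum detectable activity, follow standard alpha-spectrometry arithmetic. The sketch below uses the common Currie-type MDA expression and illustrative values for efficiency, counts and background; these are our assumptions, not figures from the paper:

```python
import math

def chemical_yield(net_tracer_counts, efficiency, count_time_s, spiked_activity_bq):
    """Fraction of the spiked tracer activity recovered through the chemistry."""
    measured_bq = net_tracer_counts / (efficiency * count_time_s)
    return measured_bq / spiked_activity_bq

def mda_bq(background_counts, efficiency, yield_frac, count_time_s):
    """Currie-type minimum detectable activity in Bq."""
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (
        efficiency * yield_frac * count_time_s)

y = chemical_yield(2592, 0.25, 360000, 0.036)   # 0.80, i.e. 80% yield
m = mda_bq(4, 0.25, y, 360000)                  # order 1e-4 Bq
```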
Directory of Open Access Journals (Sweden)
J.M. Artim
2016-08-01
Full Text Available Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.
Oxborrow, G. S.; Roark, A. L.; Fields, N. D.; Puleo, J. R.
1974-01-01
Microbiological sampling methods presently used for enumeration of microorganisms on spacecraft surfaces require contact with easily damaged components. Estimation of viable particles on surfaces using air sampling methods in conjunction with a mathematical model would be desirable. Parameters necessary for the mathematical model are the effect of angled surfaces on viable particle collection and the number of viable cells per viable particle. Deposition of viable particles on angled surfaces closely followed a cosine function, and the number of viable cells per viable particle was consistent with a Poisson distribution. Other parameters considered by the mathematical model included deposition rate and fractional removal per unit time. A close nonlinear correlation between volumetric air sampling and airborne fallout on surfaces was established with all fallout data points falling within the 95% confidence limits as determined by the mathematical model.
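The two model ingredients identified above, cosine-law deposition on angled surfaces and a Poisson count of viable cells per particle, can be sketched directly (numbers illustrative):

```python
import math

def deposition_on_angle(horizontal_rate, angle_deg):
    """Cosine-law fallout rate on a surface tilted angle_deg from horizontal."""
    return horizontal_rate * math.cos(math.radians(angle_deg))

def poisson_pmf(k, mean_cells):
    """Probability that a viable particle carries exactly k viable cells,
    assuming the Poisson distribution reported in the abstract."""
    return math.exp(-mean_cells) * mean_cells ** k / math.factorial(k)

# A surface tilted 60 degrees collects half the horizontal fallout rate
rate = deposition_on_angle(10.0, 60.0)  # 5.0
```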
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
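The within/between partition of squared distances behind PERMANOVA fits in a few lines. This is a minimal permutation test for intuition, not the authors' R package:

```python
import itertools
import random

def permanova_pseudo_f(dist, groups):
    """Pseudo-F from a square distance matrix: between-group over
    within-group mean squared distance."""
    n = len(groups)
    levels = sorted(set(groups))
    ss_total = sum(dist[i][j] ** 2
                   for i, j in itertools.combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in levels:
        idx = [i for i in range(n) if groups[i] == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    a = len(levels)
    ss_between = ss_total - ss_within
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permanova_pvalue(dist, groups, n_perm=199, seed=0):
    """Permutation p-value: shuffle labels, recompute pseudo-F."""
    rng = random.Random(seed)
    f_obs = permanova_pseudo_f(dist, groups)
    hits = sum(permanova_pseudo_f(dist, rng.sample(groups, len(groups))) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

Power estimation then amounts to simulating many such distance matrices under a chosen effect size and recording how often the p-value falls below the significance threshold.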
International Nuclear Information System (INIS)
Nishina, Kojiro; Oyamatsu, Kazuhiro; Kondo, Shunsuke; Sekimoto, Hiroshi; Ishitani, Kazuki; Yamane, Yoshihiro; Miyoshi, Yoshinori
2000-01-01
This accident occurred when workers, filling a precipitation tank by hand contrary to the established procedure, poured in a uranium solution such that both the cylindrical diameter and the total mass exceeded their limiting values. As a result, the fission chain reaction in the solution not only reached a self-sustaining 'critical' state but momentarily became supercritical, further multiplying the number of fissions. The accident took place not in a reactor but in a uranium fuel processing plant, a place that must never be allowed to reach criticality. Because the mechanism of the chain reaction is involved, knowledge of reactor physics is naturally required; it is also necessary to understand the chemical reactions in the process and the functions of the tanks, valves and pumps installed there. To determine the critical volume of the uranium solution involved in the accident by nuclear physics methods, information was needed on the uranium enrichment; the atomic densities of the nuclides that strongly affect the chain reaction, such as uranium and hydrogen; the shape, internal structure and size of the solution container; and its temperature and total volume. This report describes the estimation of the energy release in the JCO accident, estimates from analytical results on neutrons and the solution, and nuclear-physics calculations for the JCO precipitation tank performed at JAERI. (G.K.)
Directory of Open Access Journals (Sweden)
Manzoor Khan
2014-01-01
Full Text Available This paper presents new classes of estimators for the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that the proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of the proposed class of estimators.
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
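The k-nearest-neighbors idea, including the "inexperience" diagnostic, fits in a few lines. The coordinates and TOC values below are made up for illustration, not the Seiter et al. compilation:

```python
import math

def knn_predict(X, y, query, k=3):
    """Mean of the k nearest training targets in predictor space."""
    order = sorted(range(len(X)), key=lambda i: math.dist(X[i], query))
    return sum(y[i] for i in order[:k]) / k

def inexperience(X, query):
    """Distance to the single nearest training point: large values flag
    regions where new data collection would most improve predictions."""
    return min(math.dist(x, query) for x in X)

# predictors: (bathymetry_km, distance_from_coast_km); both illustrative
X = [(0.2, 10), (0.5, 40), (1.0, 90), (3.0, 300), (4.0, 450)]
toc = [1.8, 1.2, 0.9, 0.3, 0.2]  # seafloor TOC, wt% (made-up values)
est = knn_predict(X, toc, (0.4, 35), k=2)  # averages the two nearest wells
```

A real implementation would also standardize the predictors and cross-validate the choice of k and predictor set, as the abstract's ten-fold validation does.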
LIU Yun-xia; WEN Yun-jie; HUANG Jin-li; LI Gui-hua; CHAI Xiao; WANG Hong
2015-01-01
The vanadium molybdate yellow colorimetric (VMYC) method is regarded as one of the conventional methods for determining total phosphorus (P) in plants, but it is a time-consuming procedure. The continuous flow analyzer (CFA) is a fluid-stream segmentation technique using air segments. It is used to measure P concentration based on the molybdate-antimony-ascorbic acid method of Murphy and Riley. Sixty-nine maize plant samples were selected and digested with H₂SO₄-H₂O₂. P concentrations in the dige...
Jara-Aguirre, Jose C; Smeets, Steven W; Wockenfus, Amy M; Karon, Brad S
2018-05-01
Evaluate the effects of blood gas sample contamination with total parenteral nutrition (TPN)/lipid emulsion and dextrose 50% (D50) solutions on blood gas and electrolyte measurement; and determine whether glucose concentration can predict blood gas sample contamination with TPN/lipid emulsion or D50. Residual lithium heparin arterial blood gas samples were spiked with TPN/lipid emulsion (0 to 15%) and D50 solutions (0 to 2.5%). Blood gases (pH, pCO₂, pO₂), electrolytes (Na⁺, K⁺, ionized calcium) and hemoglobin were measured with a Radiometer ABL90. Glucose concentration was measured in separated plasma by a Roche Cobas c501. Chart review of neonatal blood gas results with glucose >300 mg/dL (>16.65 mmol/L) over a seven-month period was performed to determine whether repeat (within 4 h) blood gas results suggested pre-analytical errors in blood gas results. Results were used to determine whether a glucose threshold could predict contamination resulting in blood gas and electrolyte results with greater than laboratory-defined allowable error. Samples spiked with 5% or more TPN/lipid emulsion solution or 1% D50 showed glucose concentrations >500 mg/dL (>27.75 mmol/L) and produced blood gas (pH, pO₂, pCO₂) results with greater than laboratory-defined allowable error. TPN/lipid emulsion, but not D50, produced greater than allowable error in electrolyte (Na⁺, K⁺, Ca²⁺, Hb) results at these concentrations. Based on chart review of 144 neonatal blood gas results with glucose >250 mg/dL received over seven months, four of ten neonatal intensive care unit (NICU) patients with glucose results >500 mg/dL and repeat blood gas results within 4 h had results highly suggestive of pre-analytical error. Only 3 of 36 NICU patients with glucose results 300-500 mg/dL and repeat blood gas results within 4 h had clear pre-analytical errors in blood gas results. Glucose concentration can be used as an indicator of significant blood sample contamination with either TPN
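The reported thresholds suggest a simple triage rule; the mg/dL cut-offs are the study's, but the action wording is our own simplification of what a laboratory might do:

```python
def contamination_review(glucose_mg_dl):
    """Flag blood gas samples whose plasma glucose suggests TPN/lipid
    or D50 contamination (sketch of a decision rule, not lab policy)."""
    if glucose_mg_dl > 500:
        return "likely contaminated: repeat blood gas before reporting"
    if glucose_mg_dl > 300:
        return "possible contamination: review against repeat results"
    return "no contamination flag"
```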
International Nuclear Information System (INIS)
Leon, G.C. de; Shiraishi, K.; Kawamura, H.; Igaraishi, Y.; Palattao, M.V.; Azanon, E.M.
1990-10-01
Total diet samples were analyzed for major elements (Na, K, Ca, Mg, P) and some minor and trace elements (Fe, Zn, Mn, Al, Sr, Cu, Ba, Y) using inductively coupled plasma-atomic emission spectrometry (ICP-AES). Samples analyzed were classified into sex and age groups. Results for some elements (Na, K, Mg, Zn, Cu, Mn) were compared with values from the Bataan dietary survey calculated using the Philippine food composition table. Except for Na, analytical results were similar to calculated values. Analytical results for Ca and Fe were also compared with values from the Food and Nutrition Research Institute (FNRI). In general, values obtained in the study were lower than the FNRI values. Comparison of the analytical and calculated results with the Japanese and ICRP data showed that Philippine values were lower than foreign values. (Auth.). 22 refs., 9 tabs
Porosity estimation by semi-supervised learning with sparsely available labeled samples
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was previously proposed, showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than is usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the labor-intensive, time-consuming manual geostatistics approach on real data, proving its potential as a practical industrial tool.
Variations among animals when estimating the undegradable fraction of fiber in forage samples
Directory of Open Access Journals (Sweden)
Cláudia Batista Sampaio
2014-10-01
Full Text Available The objective of this study was to assess the variability among animals regarding the critical time required to estimate the undegradable fraction of fiber (ct) using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were placed in Ankom® F57 bags and incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, significant variability among animals was found for the fresh elephant grass. The results suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.
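A common way to define such a critical time is the point at which the potentially degradable pool, decaying first-order at rate k, has fallen to a negligible fraction, so the residue approximates the undegradable fraction U. The sketch below assumes that standard model with illustrative parameter values:

```python
import math

def residue(t, U, D0, k):
    """In situ residue model: R(t) = U + D0 * exp(-k * t), with U the
    undegradable fraction, D0 the degradable pool, k the fractional rate."""
    return U + D0 * math.exp(-k * t)

def critical_time(k, frac=0.05):
    """Incubation time at which only `frac` of the degradable pool remains."""
    return -math.log(frac) / k

# k = 0.03 per hour gives a critical time of roughly 100 hours at 5%
ct = critical_time(0.03)
```

Animal-to-animal variation in k, as found for fresh elephant grass, directly shifts this critical time, which is why a common ct could not be fixed for that forage.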
International Nuclear Information System (INIS)
Endah Damastuti; Muhayatun; Diah Dwiana L
2009-01-01
Besides fulfilling the requirements of the international standard ISO/IEC 17025:2005, uncertainty estimation should be carried out to increase the quality of and confidence in analytical results, and to establish the traceability of those results to SI units. Neutron activation analysis (NAA) is a major technique used by the Radiometry Analysis Laboratory and is included in its scope of accreditation under ISO/IEC 17025:2005; therefore, uncertainty estimation for NAA needs to be carried out. Sample and standard preparation, as well as irradiation and measurement using gamma spectrometry, were the main activities contributing to the uncertainty. The components of the uncertainty sources are explained in detail. The expanded uncertainty was 4.0 mg/kg at a 95% level of confidence (coverage factor = 2) for a Zn concentration of 25.1 mg/kg. The counting statistics of the sample and standard were the major contributors to the combined uncertainty. The uncertainty estimation is expected to increase the quality of the analysis results and can be applied further to other kinds of samples. (author)
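The expanded uncertainty quoted (4.0 mg/kg with coverage factor 2) follows the standard root-sum-of-squares combination of independent components; a sketch with illustrative component values in mg/kg:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares of independent standard-uncertainty components
    (e.g. counting statistics, sample mass, flux monitor, efficiency)."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2):
    """Expanded uncertainty at approximately 95% confidence, coverage factor k."""
    return k * combined_standard_uncertainty(components)

# Illustrative budget dominated by counting statistics, as in the abstract
U = expanded_uncertainty([1.5, 1.0, 0.5, 0.7])  # close to 4.0 mg/kg
```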
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
Energy Technology Data Exchange (ETDEWEB)
He, Shiyuan; Huang, Jianhua Z.; Long, James [Department of Statistics, Texas A and M University, College Station, TX (United States); Yuan, Wenlong; Macri, Lucas M., E-mail: lmacri@tamu.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX (United States)
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
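The dense-grid frequency search can be illustrated with a bare-bones periodogram, a much-simplified stand-in for the paper's semi-parametric Gaussian process likelihood:

```python
import math

def periodogram_power(t, y, period):
    """Un-normalized sinusoidal power of irregularly sampled data
    at a trial period."""
    ybar = sum(y) / len(y)
    w = 2.0 * math.pi / period
    s = sum((yi - ybar) * math.sin(w * ti) for ti, yi in zip(t, y))
    c = sum((yi - ybar) * math.cos(w * ti) for ti, yi in zip(t, y))
    return s * s + c * c

def best_period(t, y, grid):
    """Grid search: the likelihood surface is highly multimodal in period,
    so a dense scan is safer than local optimization alone."""
    return max(grid, key=lambda p: periodogram_power(t, y, p))

# Irregularly sampled, noise-free sinusoid with true period 10
t = [i + 0.3 * math.sin(1.7 * i) for i in range(40)]
y = [math.sin(2.0 * math.pi * ti / 10.0) for ti in t]
p = best_period(t, y, [6.0, 8.0, 10.0, 12.0, 15.0])
```

The paper's hybrid scheme plays the same role: scan period on a dense grid, and run quasi-Newton updates only over the Gaussian process parameters at each grid point.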
Madi, Mahmoud K; Karameh, Fadi N
2017-01-01
Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. Recent advances in the Cubature Kalman filter (CKF) have extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate
Directory of Open Access Journals (Sweden)
Gabriele Doblhammer
2011-02-01
Full Text Available This paper introduces a set of methods for estimating fertility indicators in the absence of recent and short-term birth statistics. For Germany, we propose a set of straightforward methods that allow for the computation of monthly and yearly total fertility rates (mTFR) on the basis of preliminary monthly data, including a confidence interval. The method for estimating the most current fertility rates can be applied when no information on the age structure and the number of women exposed to childbearing is available. The methods introduced in this study are useful for calculating monthly birth indicators, with minimal requirements for data quality and statistical effort. In addition, we suggest an approach for projecting the yearly TFR based on preliminary monthly information up to June.
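One simple reading of such a projection is to scale a reference year's TFR by the ratio of observed to reference births over the months available so far. This is our simplification for illustration (it ignores exposure changes and seasonality, which the paper's estimators handle more carefully):

```python
def project_yearly_tfr(tfr_ref, births_ref_by_month, births_obs_by_month):
    """Project the current year's TFR from preliminary monthly birth counts
    by scaling a reference year's TFR with the observed/reference birth
    ratio over the months reported so far (simplified sketch)."""
    m = len(births_obs_by_month)
    ratio = sum(births_obs_by_month) / sum(births_ref_by_month[:m])
    return tfr_ref * ratio

# Reference TFR 1.4; six preliminary months each 10% above the reference
proj = project_yearly_tfr(1.4, [100] * 12, [110] * 6)  # about 1.54
```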
International Nuclear Information System (INIS)
Hoefler, H.; Streli, C.; Wobrauschek, P.; Ovari, M.; Zaray, Gy.
2006-01-01
Recently there has been growing interest in low-Z elements, from carbon and oxygen up to sulphur and phosphorus, in biological specimens. Total reflection X-ray fluorescence (TXRF) spectrometry is a suitable technique, demanding only very small amounts of sample. On the other hand, the detection of low-Z elements is a critical point of this analytical technique. Besides other effects, self-absorption may occur in the samples because of the low energy of the fluorescence radiation, so the calibration curves may no longer be linear. To investigate this issue, water samples and samples of human cerebrospinal fluid were used to examine absorption effects. The linearity of the calibration curves as a function of sample mass was investigated to verify the validity of the thin-film approximation. The special requirements of the experimental setup for low-Z energy dispersive fluorescence analysis were met by using the Atominstitute's TXRF vacuum chamber. This spectrometer is equipped with a Cr-anode X-ray tube, a multilayer monochromator and a Si(Li) detector with 30 mm² active area and an ultrathin entrance window. Another subject of this study is biofilms, which live on all subaqueous surfaces and consist of bacteria, algae and fungi embedded in their extracellular polymeric substances (EPS). Many trace elements from the water are bound in the biofilm; thus, the biofilm is a useful indicator of polluting elements. For biomonitoring purposes, not only the polluting elements but also the formation and growth rate of the biofilm are important. Biofilms were directly grown on TXRF reflectors. Their major elements and C-masses, correlated to the cultivation time, were investigated. These measured masses were related to the area seen by the detector, which was experimentally determined. The homogeneity of the biofilms was checked by measuring various sample positions on the reflectors.
Energy Technology Data Exchange (ETDEWEB)
Hoefler, H. [Atominstitut of the Austrian Universities, TU-Wien, A-1020 Vienna (Austria); Streli, C. [Atominstitut of the Austrian Universities, TU-Wien, A-1020 Vienna (Austria)]. E-mail: streli@ati.ac.at; Wobrauschek, P. [Atominstitut of the Austrian Universities, TU-Wien, A-1020 Vienna (Austria); Ovari, M. [Eoetvoes University, Institute of Chemistry, H-1117, Budapest, Pazmany P. stny 1/a. (Hungary); Zaray, Gy. [Eoetvoes University, Institute of Chemistry, H-1117, Budapest, Pazmany P. stny 1/a. (Hungary)
2006-11-15
Recently there has been growing interest in low-Z elements, from carbon and oxygen up to sulphur and phosphorus, in biological specimens. Total reflection X-ray fluorescence (TXRF) spectrometry is a suitable technique, demanding only very small amounts of sample. On the other hand, the detection of low-Z elements is a critical point of this analytical technique. Besides other effects, self-absorption may occur in the samples because of the low energy of the fluorescence radiation, so the calibration curves may no longer be linear. To investigate this issue, water samples and samples of human cerebrospinal fluid were used to examine absorption effects. The linearity of the calibration curves as a function of sample mass was investigated to verify the validity of the thin-film approximation. The special requirements of the experimental setup for low-Z energy dispersive fluorescence analysis were met by using the Atominstitute's TXRF vacuum chamber. This spectrometer is equipped with a Cr-anode X-ray tube, a multilayer monochromator and a Si(Li) detector with 30 mm² active area and an ultrathin entrance window. Another subject of this study is biofilms, which live on all subaqueous surfaces and consist of bacteria, algae and fungi embedded in their extracellular polymeric substances (EPS). Many trace elements from the water are bound in the biofilm; thus, the biofilm is a useful indicator of polluting elements. For biomonitoring purposes, not only the polluting elements but also the formation and growth rate of the biofilm are important. Biofilms were directly grown on TXRF reflectors. Their major elements and C-masses, correlated to the cultivation time, were investigated. These measured masses were related to the area seen by the detector, which was experimentally determined. The homogeneity of the biofilms was checked by measuring various sample positions on the reflectors.
Hilliard, Mark; Alley, William R; McManus, Ciara A; Yu, Ying Qing; Hallinan, Sinead; Gebler, John; Rudd, Pauline M
Glycosylation is an important attribute of biopharmaceutical products to monitor from development through production. However, glycosylation analysis has traditionally been a time-consuming process with long sample preparation protocols and manual interpretation of the data. To address the challenges associated with glycan analysis, we developed a streamlined analytical solution that covers the entire process from sample preparation to data analysis. In this communication, we describe the complete analytical solution that begins with a simplified and fast N-linked glycan sample preparation protocol that can be completed in less than 1 hr. The sample preparation includes labelling with RapiFluor-MS tag to improve both fluorescence (FLR) and mass spectral (MS) sensitivities. Following HILIC-UPLC/FLR/MS analyses, the data are processed and a library search based on glucose units has been included to expedite the task of structural assignment. We then applied this total analytical solution to characterize the glycosylation of the NIST Reference Material mAb 8761. For this glycoprotein, we confidently identified 35 N-linked glycans and all three major classes, high mannose, complex, and hybrid, were present. The majority of the glycans were neutral and fucosylated; glycans featuring N-glycolylneuraminic acid and those with two galactoses connected via an α1,3-linkage were also identified.
International Nuclear Information System (INIS)
Bou-Rabee, F.
1995-01-01
The concentration of uranium in Kuwait soil samples, as well as in solid fall-out and surface air-suspended matter samples, has been assayed by inductively coupled plasma mass spectrometry (ICP-MS). It was found that the average U concentration in the soil samples (∼ 0.7 μg/g) is half of that in the solid fall-out and air particulate matter samples. The average U concentration in the latter samples was 2 μg/g in the summer season and decreased to 1 μg/g during the winter of 1993/94. The higher concentration in the solid fall-out and air samples cannot be explained by fall-out from the oil-fired power station, as the average U concentration of the fly ashes escaping from the station was only 0.22 μg/g. The uranium concentration in the tap water was a very low 0.02 μg/L. The total per capita annual intake of uranium via inhalation by Kuwait inhabitants was estimated to be approximately 0.05 Bq, which is <0.2% of the recommended annual limit on intake for members of the general population. (author)
A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies
Directory of Open Access Journals (Sweden)
Hojin Moon
2002-12-01
Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
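The simulation-based power estimation described above can be sketched in miniature. The example below is a deliberately simplified stand-in: it simulates binomial tumor counts per dose group and applies a Cochran-Armitage-style trend test, ignoring occult tumors, competing risks, and sacrifice schedules; all probabilities and group sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def trend_test_z(tumors, n_per_group, doses):
    """Cochran-Armitage-style statistic for a dose-related trend in tumor counts."""
    n = np.asarray(n_per_group, dtype=float)
    x = np.asarray(tumors, dtype=float)
    d = np.asarray(doses, dtype=float)
    N = n.sum()
    pbar = x.sum() / N
    num = np.sum(d * (x - n * pbar))
    var = pbar * (1.0 - pbar) * (np.sum(n * d**2) - np.sum(n * d)**2 / N)
    return num / np.sqrt(var)

def estimated_power(p_by_dose, n_per_group, doses, sims=2000, z_crit=1.645):
    """Fraction of simulated experiments in which the trend test rejects."""
    rejections = 0
    for _ in range(sims):
        tumors = rng.binomial(n_per_group, p_by_dose)
        if trend_test_z(tumors, n_per_group, doses) > z_crit:
            rejections += 1
    return rejections / sims

power = estimated_power(p_by_dose=[0.05, 0.10, 0.20, 0.30],
                        n_per_group=[50, 50, 50, 50],
                        doses=[0, 1, 2, 4])
```

The actual tool varies sample size and sacrifice scheme until the estimated power reaches a target; this sketch shows only the inner Monte Carlo loop.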
Effects of sample size on estimation of rainfall extremes at high temperatures
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
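The undersampling argument can be reproduced numerically: draw many small samples from a heavy-tailed generalized Pareto distribution and compare the mean empirical high quantile with the true one. The shape and scale values below are illustrative, not fitted to the German station data.

```python
import numpy as np

rng = np.random.default_rng(0)
xi, sigma = 0.2, 1.0              # GPD shape and scale (illustrative values)

def gpd_quantile(p):
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

def gpd_sample(size):
    return gpd_quantile(rng.uniform(size=size))   # inverse-transform sampling

true_q = gpd_quantile(0.99)
# Empirical 99th-percentile estimates from many samples of only 30 events
est = np.array([np.quantile(gpd_sample(30), 0.99) for _ in range(2000)])
bias = est.mean() - true_q        # negative: small samples underestimate
```

A sample of 30 cannot represent a 100-event return period, so the order-statistic estimate sits systematically below the true quantile, which is the paper's point about high-temperature bins with few rainfall events.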
Effects of sample size on estimation of rainfall extremes at high temperatures
Directory of Open Access Journals (Sweden)
B. Boessenkool
2017-09-01
Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Hydrological simulation at large scales and high resolution has refined spatial descriptions of hydrological behavior, but this trend is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution offer better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
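A minimal GLUE-style sketch is shown below, using a toy exponential-decay model, uniform prior sampling, and a Nash-Sutcliffe informal likelihood. All numbers are hypothetical, and the study's heuristic-optimization sampling is replaced here by plain random sampling.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 5.0, 40)
true_a, true_b = 2.0, 0.7                        # hypothetical "true" parameters
obs = true_a * np.exp(-true_b * t) + rng.normal(0.0, 0.05, t.size)

def model(a, b):
    return a * np.exp(-b * t)

def nse(sim):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

# 1. Sample the prior parameter space (plain random sampling here)
samples = rng.uniform([0.5, 0.1], [4.0, 2.0], size=(5000, 2))
likelihood = np.array([nse(model(a, b)) for a, b in samples])

# 2. Keep "behavioural" sets above a threshold and weight by likelihood
keep = likelihood > 0.8
behavioural, weights = samples[keep], likelihood[keep]
weights = weights / weights.sum()
a_est = np.sum(behavioural[:, 0] * weights)      # likelihood-weighted estimate
```

The paper's contribution is to replace step 1 with evolutionary search so that more of the samples fall in the high-likelihood region.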
Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates
Directory of Open Access Journals (Sweden)
Zhenyu Mei
2014-10-01
Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors has two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high frequency noise signals.
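The validity-window idea can be sketched as follows; the thresholds, window-growth rule, and function names are hypothetical illustrations, not the paper's actual algorithm.

```python
from collections import deque
from statistics import median

def filter_travel_times(records, base_width=0.3, growth=0.1, history=10):
    """Keep a record if it lies within a validity window around the running
    median; the window widens with each empty sampling interval (None)."""
    recent = deque(maxlen=history)
    gaps, valid = 0, []
    for r in records:
        if r is None:                 # sampling interval with no Bluetooth match
            gaps += 1
            continue
        if not recent:
            recent.append(r); valid.append(r); gaps = 0
            continue
        m = median(recent)
        width = m * (base_width + growth * gaps)   # window grows with gaps
        if abs(r - m) <= width:
            valid.append(r); recent.append(r); gaps = 0
        # else: discarded as noise (e.g., a pedestrian or parked device)
    return valid

times = [300, 310, None, 295, 900, None, None, 320, 305]
clean = filter_travel_times(times)    # the 900 s outlier is rejected
```

Widening the window after empty intervals reflects the low-sampling-rate setting: after a long gap, conditions may genuinely have changed, so a larger deviation is still plausible.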
Nguyen, Thanh-Nghia; Trocio, Jeffrey; Kowal, Stacey; Ferrufino, Cheryl P; Munakata, Julie; South, Dell
2016-12-01
Health management is becoming increasingly complex, given a range of care options and the need to balance costs and quality. The ability to measure and understand drivers of costs is critical for healthcare organizations to effectively manage their patient populations. Healthcare decision makers can leverage real-world evidence to explore the value of disease-management interventions in shifting total cost trends. To develop a real-world, evidence-based estimator that examines the impact of disease-management interventions on the total cost of care (TCoC) for a patient population with nonvalvular atrial fibrillation (NVAF). Data were collected from a patient-level real-world evidence data set that uses the IMS PharMetrics Health Plan Claims Database. Pharmacy and medical claims for patients meeting the inclusion or exclusion criteria were combined in longitudinal cohorts with a 180-day preindex and 360-day follow-up period. Descriptive statistics, such as mean and median patient costs and event rates, were derived from a real-world evidence analysis and were used to populate the base-case estimates within the TCoC estimator, an exploratory economic model that was designed to estimate the potential impact of several disease-management activities on the TCoC for a patient population with NVAF. Using Microsoft Excel, the estimator is designed to compare current direct costs of medical care to projected costs by varying assumptions on the impact of disease-management activities and applying the associated changes in cost trends to the affected populations. Disease-management levers are derived from literature-based concepts affecting costs along the NVAF disease continuum. The use of the estimator supports analyses across 4 US geographic regions, age, cost types, and care settings during 1 year. All patients included in the study were continuously enrolled in their health plan (within the IMS PharMetrics Health Plan Claims Database) between July 1, 2010, and June 30
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A
2016-01-01
In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except for at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern with a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern with a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
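For a discrete distribution pair whose probabilities can be evaluated pointwise, the Monte Carlo idea can be sketched via the identity TV(P, Q) = E_{x~P}[max(0, 1 - q(x)/p(x))]; the three-state example below is illustrative and does not reproduce the paper's phylogenetic chains or GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def tv_exact(p, q):
    """Exact total variation distance for discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def tv_monte_carlo(p, q, n=200_000):
    """Estimate TV(P, Q) = E_{x~P}[max(0, 1 - q(x)/p(x))] by sampling from P."""
    x = rng.choice(len(p), size=n, p=p)
    return np.mean(np.maximum(0.0, 1.0 - q[x] / p[x]))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
est = tv_monte_carlo(p, q)        # close to the exact value 0.3
```

The appeal of the sampling form is that it needs only the ability to draw from one distribution and evaluate density ratios, not to enumerate a huge state space; each sample is independent, which is what makes a GPU implementation attractive.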
Directory of Open Access Journals (Sweden)
Margaret E Hunter
Full Text Available Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
Hunter, Margaret E; Oyler-McCance, Sara J; Dorazio, Robert M; Fike, Jennifer A; Smith, Brian J; Hunter, Charles T; Reed, Robert N; Hart, Kristen M
2015-01-01
Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
Directory of Open Access Journals (Sweden)
LIU Yun-xia
2015-12-01
Full Text Available The vanadium molybdate yellow colorimetric method (VMYC method) is regarded as one of the conventional methods for determining total phosphorus (P) in plants, but it is a time-consuming procedure. The continuous flow analyzer (CFA) is a fluid stream segmentation technique with air segments. It is used to measure P concentration based on the molybdate-antimony-ascorbic acid method of Murphy and Riley. Sixty-nine maize plant samples were selected and digested with H2SO4-H2O2. P concentrations in the digests were determined by the CFA and the VMYC method, respectively. A t test found no significant difference between the plant P contents measured by the CFA and the VMYC method. A linear equation best describes their relationship: Y(CFA-P) = 0.927X(VMYC-P) - 0.002. The Pearson's correlation coefficient was 0.985 (n = 69, P < 0.01). The CFA method for plant P measurement had a high precision, with a relative standard deviation (RSD) of less than 1.5%. It is suggested that CFA based on Murphy and Riley colorimetric detection can be used to determine total plant P in H2SO4-H2O2 digest solutions. The CFA method is labor saving and can handle large numbers of samples, and human error in mixing and other operations is greatly reduced.
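The reported regression between the two methods can be illustrated with simulated paired measurements; the data below merely mimic the published relationship and are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical paired total-P measurements by the two methods (69 samples)
vmyc = rng.uniform(1.0, 5.0, 69)
cfa = 0.927 * vmyc - 0.002 + rng.normal(0.0, 0.05, 69)  # mimic reported fit

slope, intercept = np.polyfit(vmyc, cfa, 1)   # least-squares regression line
r = np.corrcoef(vmyc, cfa)[0, 1]              # Pearson correlation coefficient
```

With paired measurements in hand, the slope near 0.927 and a correlation near 1 are exactly the kind of method-comparison evidence the abstract reports.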
A method of estimating hydrogen in solid and liquid samples by means of neutron thermalisation
International Nuclear Information System (INIS)
Carter, D.H.; Sanders, J.E.
1967-06-01
The count-rate of a cadmium-covered Pu239 fission chamber placed in a reactor neutron flux increases when a hydrogen-containing material is inserted due to the thermalisation of epicadmium neutrons. This effect forms the basis of a non-destructive method of estimating hydrogen in solid or liquid samples, and trial experiments to demonstrate the principles have been made. The sensitivity is such that hydrogen down to 10 p.p.m. in a typical metal should be detected. A useful feature of the method is its very low response to elements other than hydrogen. (author)
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
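For contrast, the conventional approach the authors depart from, namely building a mean-variance scatterplot from homogeneous patches and fitting the signal-dependent model var(y|x) = a + b*x, can be sketched as follows (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, b_true = 0.01, 0.05     # signal-dependent model: var(y | x) = a + b*x

# Simulate homogeneous 64-pixel patches: constant signal plus matched noise
levels = rng.uniform(0.1, 1.0, 3000)            # one signal level per patch
noise_sd = np.sqrt(a_true + b_true * levels)
patches = levels[:, None] + rng.normal(0.0, noise_sd[:, None], (3000, 64))

means = patches.mean(axis=1)                    # local mean per patch
variances = patches.var(axis=1, ddof=1)         # local variance per patch
b_est, a_est = np.polyfit(means, variances, 1)  # fit var = a + b*mean
```

The paper's point is that the homogeneity requirement on patches can be dropped: with a mixture-of-Gaussians analysis, arbitrary heterogeneous patches drawn at random still identify (a, b).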
International Nuclear Information System (INIS)
Mangala, M.J.; Korir, K.A.; Maina, D.M.; Kinyua, A.M.
2000-01-01
Results of trace element analysis by TXRF of tap water and various brands of bottled mineral water samples, representative of local and imported brands sold in Nairobi, are reported. The variation in elemental concentrations in the water samples analyzed was as follows: potassium (K) 0.2 to 28.9 μg/ml; calcium (Ca) 2.2 to 120 μg/ml; titanium (Ti) 11 to 60 μg/l; manganese (Mn) 8 to 670 μg/l; iron (Fe) 31 to 540 μg/l; copper (Cu) 8 to 30 μg/l; zinc (Zn) 8 to 4730 μg/l; bromine (Br) 9 to 248 μg/l; rubidium (Rb) 10 to 40 μg/l and strontium (Sr) 10 to 1000 μg/l. Local mineral water samples contain higher levels of the trace elements manganese (Mn), zinc (Zn), bromine (Br), rubidium (Rb) and strontium (Sr) than the imported brands. Principal component analysis of the results revealed three clusters of component loadings: rubidium (Rb), strontium (Sr) and calcium (Ca); titanium (Ti), iron (Fe), bromine (Br) and zinc (Zn); and zinc (Zn), manganese (Mn) and potassium (K), respectively. The percentages of total variance explained by the components were 31.4, 27.3, and 14.8, respectively. In this study, we also found that a limited spread of 5-6 mm for a 10 μl sample was achieved when the quartz sample carrier was dried in a low-pressure (300 mbar) oven at 70 °C for 10 hours. (author)
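The principal component step can be sketched generically: standardize the concentration matrix, take the eigendecomposition of its covariance, and read off loadings and explained variance. The data below are random placeholders, not the Nairobi measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder matrix: rows = water samples, columns = element concentrations
X = rng.normal(size=(20, 10))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)            # standardise each element
eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
order = np.argsort(eigvals)[::-1]                    # descending variance
loadings = eigvecs[:, order]                         # component loading factors
explained = 100.0 * eigvals[order] / eigvals.sum()   # % of total variance
```

Elements with large loadings on the same component form the clusters reported in the abstract (e.g., Rb, Sr and Ca on the first component).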
Directory of Open Access Journals (Sweden)
S. Inguaggiato
2005-06-01
Full Text Available A fast and completely automated procedure is proposed for the preparation and determination of δ13C of total inorganic carbon dissolved in water (δ13CTDIC). This method is based on the acidification of water samples, transforming all the dissolved inorganic carbon species into CO2. Water samples are directly injected by syringe into 5.9 ml vials with screw caps which have a pierceable rubber septum. An Analytical Precision «Carbonate Prep System» was used both to flush pure helium into the vials and to automatically dispense a fixed amount of H3PO4. Full equilibrium between the produced CO2 and the water is reached at a temperature of 70°C (±0.1°C) in less than 24 h. Carbon isotope ratios (13C/12C) were measured on an AP 2003 continuous flow mass spectrometer, connected on-line with the injection system. The precision and reproducibility of the proposed method were tested both on aqueous standard solutions prepared using Na2CO3 with δ13C = -10.78 per mil versus PDB (1σ = 0.08, n = 11) at five different concentrations (2, 3, 4, 5 and 20 mmol/l), and on more than thirty natural samples. The mean δ13CTDIC of the standard solution samples is -10.89 per mil versus PDB (1σ = 0.18, n = 50), revealing good analytical precision and reproducibility. A comparison between average δ13CTDIC values for a quadruplicate set of natural samples and those obtained following the chemical and physical stripping method highlights good agreement between the two analytical methods.
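The delta notation used in this abstract can be made concrete in a few lines; the PDB ratio below is a commonly cited literature value, not taken from this study.

```python
R_PDB = 0.0112372   # 13C/12C ratio of the PDB standard (common literature value)

def delta13C(r_sample):
    """Delta notation: per-mil deviation of a sample ratio from the standard."""
    return (r_sample / R_PDB - 1.0) * 1000.0

# A sample with delta13C of -10.78 per mil corresponds to the ratio:
r = R_PDB * (1.0 - 10.78 / 1000.0)
```

Because delta values are ratios of ratios, the absolute calibration of the standard cancels when comparing samples measured against the same reference.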
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
Energy Technology Data Exchange (ETDEWEB)
Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: (1) A streaming algorithm called HyperHeadTail that estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges and graph structures that evolve over time. (2) An algorithm for maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
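For the static-weight case, a standard one-pass weighted sampling scheme (the A-Res algorithm of Efraimidis and Spiridakis) gives the flavor of the second result; handling dynamically changing weights, the report's actual contribution, requires more machinery than this sketch.

```python
import heapq
import random

def weighted_stream_sample(stream, k, rng=None):
    """One-pass weighted sampling without replacement (A-Res): each item gets
    key u**(1/w) for u ~ Uniform(0,1); the k largest keys are retained."""
    rng = rng or random.Random(0)
    heap = []                                   # min-heap of (key, item)
    for item, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

stream = [("a", 1.0), ("b", 10.0), ("c", 0.1), ("d", 5.0)]
sample = weighted_stream_sample(stream, k=2)
```

The working storage is O(k) regardless of stream length, which is the defining constraint of the streaming model described above.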
International Nuclear Information System (INIS)
Wright, T.
1983-01-01
Consider a stratified population with L strata, so that a Poisson random variable is associated with each stratum. The parameter associated with the h-th stratum is θ_h, h = 1, 2, ..., L. Let ω_h be the known proportion of the population in the h-th stratum, h = 1, 2, ..., L. The authors want to estimate the parameter θ = Σ_{h=1}^{L} ω_h θ_h. We assume that prior information is available on θ_h and that it can be expressed in terms of a gamma distribution with parameters α_h and β_h, h = 1, 2, ..., L. We also assume that the prior distributions are independent. Using a squared error loss function, a Bayes allocation of total sample size with a cost constraint is given. The Bayes estimate using the Bayes allocation is shown to have an adjusted mean square error which is strictly less than the adjusted mean square error of the classical estimate using the classical allocation
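Given the gamma-Poisson conjugacy described above, a stratum with total count x_h from n_h observations has posterior Gamma(α_h + x_h, β_h + n_h) in the shape-rate parameterisation, and under squared error loss the Bayes estimate of θ is the posterior mean. A small sketch with hypothetical numbers (the allocation rule itself is not reproduced here):

```python
def bayes_estimate(omega, alpha, beta, counts, n):
    """Posterior mean of theta = sum_h omega_h * theta_h when stratum h yields
    a total count counts[h] from n[h] Poisson observations and theta_h has a
    Gamma(alpha[h], beta[h]) prior (shape-rate parameterisation, assumed)."""
    return sum(w * (a + x) / (b + m)
               for w, a, b, x, m in zip(omega, alpha, beta, counts, n))

# Two strata with illustrative weights, priors, and observed counts
theta_hat = bayes_estimate(omega=[0.6, 0.4], alpha=[2.0, 1.0],
                           beta=[1.0, 1.0], counts=[10, 3], n=[5, 4])
```

Each stratum's posterior mean (α_h + x_h)/(β_h + n_h) shrinks the sample rate toward the prior mean α_h/β_h, with the prior's influence fading as n_h grows.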
International Nuclear Information System (INIS)
Amberger, Martin A.; Hoeltig, Michael; Broekaert, Jose A.C.
2010-01-01
The use of slurry sampling total reflection X-ray fluorescence spectrometry (SlS-TXRF) for the direct determination of Ca, Cr, Cu, Fe, Mn and Ti in four boron nitride powders is described. Measurements of the zeta potential showed that slurries with good stability can be obtained by the addition of polyethylenimine (PEI) at a concentration of 0.1 wt.% and by adjusting the pH to 4. To optimize the concentration of boron nitride in the slurries, the net line intensities and the signal-to-background ratios were determined for the trace elements Ca and Ti as well as for the internal standard element Ga at boron nitride concentrations ranging from 1 to 30 mg mL⁻¹. As a compromise between high net line intensities and high signal-to-background ratios, a concentration of 5 mg mL⁻¹ of boron nitride was found suitable and was used for all further measurements. The limits of detection of SlS-TXRF for the boron nitride powders were found to range from 0.062 to 1.6 μg g⁻¹ for Cu and Ca, respectively. They are thus higher than those obtained by solid sampling and slurry sampling graphite furnace atomic absorption spectrometry (SoS-GFAAS, SlS-GFAAS) as well as those of solid sampling electrothermal evaporation inductively coupled plasma optical emission spectrometry (SoS-ETV-ICP-OES). For Ca and Fe as well as for Cu and Fe, however, they were found to be lower than for GFAAS and for ICP-OES subsequent to wet chemical digestion, respectively. The universal applicability of SlS-TXRF to the analysis of samples with a wide variety of matrices could be demonstrated by the analysis of certified reference materials such as SiC, Al₂O₃, powdered bovine liver and borate ore with a single calibration. The correlation coefficients of the plots of the values found for Ca, Fe and Ti by SlS-TXRF in the boron nitride powders as well as in the aforementioned samples versus the reference values for the respective samples over a
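The limits of detection quoted above follow the three-sigma counting-statistics definition commonly used in TXRF. A minimal sketch, assuming the standard formula LOD = 3 · c · √N_bg / N_net; the function name and the example values in the test are illustrative, not data from the paper.

```python
import math

def txrf_detection_limit(conc, net_counts, bkg_counts):
    """Three-sigma limit of detection as commonly defined for TXRF:
    LOD = 3 * c * sqrt(N_bg) / N_net, where c is the analyte
    concentration producing net peak counts N_net above background
    counts N_bg in the measured spectrum."""
    return 3.0 * conc * math.sqrt(bkg_counts) / net_counts
```

The formula makes explicit why the LOD differs element by element: it scales with the background under each line and inversely with the net line intensity, the same two quantities optimized in the slurry-concentration study above.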
Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry
International Nuclear Information System (INIS)
Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.
2012-01-01
A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers is essential to assess intake of uranium in this facility. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. Bioassay samples collected from these workers were analyzed by an ion-exchange technique followed by laser fluorimetry. The standard addition method was followed for estimation of the uranium concentration in the samples. The minimum detectable activity by this technique is about 0.2 ng. The uranium concentrations observed in these samples ranged from 19 to 132 ng/L. A few of these samples were also analyzed by the fission track analysis technique, and the results were found to be comparable to those obtained by laser fluorimetry. The urinary excretion rate observed for each individual can be regarded as a 'personal baseline' and will be treated as the existing level of uranium in urine for these workers at the facility. (author)
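The standard addition method mentioned above can be sketched as a least-squares extrapolation: assuming the usual linear calibration model, successive known spikes are added to aliquots of the sample, the signal is regressed on the added amount, and the unknown concentration is the ratio of intercept to slope. The function name and example values are illustrative.

```python
def standard_addition(added, signal):
    """Estimate an analyte concentration by the standard-addition
    method: fit signal = b0 + b1 * added by ordinary least squares,
    then extrapolate to zero signal; the unknown concentration in
    the sample is b0 / b1 (in the units of `added`)."""
    n = len(added)
    mean_x = sum(added) / n
    mean_y = sum(signal) / n
    # Least-squares slope and intercept of signal vs. added amount.
    b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(added, signal))
          / sum((x - mean_x) ** 2 for x in added))
    b0 = mean_y - b1 * mean_x
    return b0 / b1
```

Because the spikes are measured in the same matrix as the sample, this estimate is insensitive to matrix effects that would bias an external calibration curve, which is why the method suits complex bioassay samples.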
International Nuclear Information System (INIS)
Haven, Kyle; Majda, Andrew; Abramov, Rafail
2005-01-01
Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was pr