WorldWideScience

Sample records for pulsed-column holdup estimators

  1. Consolidated Fuel Reprocessing Program. Operating experience with pulsed-column holdup estimators

    International Nuclear Information System (INIS)

    Ehinger, M.H.

    1986-01-01

    Methods for estimating pulsed-column holdup are being investigated as part of the Safeguards Assessment task of the Consolidated Fuel Reprocessing Program (CFRP) at the Oak Ridge National Laboratory. The CFRP was a major sponsor of test runs at the Barnwell Nuclear Fuel plant (BNFP) in 1980 and 1981. During these tests, considerable measurement data were collected for pulsed columns in the plutonium purification portion of the plant. These data have been used to evaluate and compare three available methods of holdup estimation

  2. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs
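The NRTA balance this abstract relies on, timely transfer measurements plus a model-estimated in-process inventory, reduces to simple bookkeeping. A minimal sketch with purely illustrative numbers (none taken from the TEKO data):

```python
def inventory_difference(begin_inv, receipts, removals, end_inv):
    """Material balance for one accounting period:
    ID = (beginning inventory + receipts) - (removals + ending inventory).
    A nonzero ID reflects measurement error, unmeasured holdup, or loss."""
    return (begin_inv + sum(receipts)) - (sum(removals) + end_inv)

# Hypothetical period (kg U); end_inv is the model-estimated column inventory.
id_kg = inventory_difference(12.0, receipts=[5.0, 4.5],
                             removals=[4.8, 4.4], end_inv=12.1)
print(round(id_kg, 3))  # 0.2
```

The point of column inventory estimation is precisely to supply the `end_inv` term without draining the contactors.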

  3. Statistical estimation of process holdup

    International Nuclear Information System (INIS)

    Harris, S.P.

    1988-01-01

    Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report, Estimation Methods for Process Holdup of Special Nuclear Materials, were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. Also, they had taken steps to improve the quality of data through controlled, larger scale, experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs
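Rolling component-level holdup predictions up into a facility estimate with random and systematic variances can be sketched as follows. Treating the systematic error as fully correlated across components (so standard deviations add linearly) is an illustrative assumption, not a detail taken from the report:

```python
import math

def combine_holdup(components):
    """components: (estimate_g, random_variance, systematic_variance) per item.
    Random errors are independent, so their variances add; systematic errors
    (e.g. a shared calibration) are assumed fully correlated, so their
    standard deviations add before squaring."""
    estimate = sum(e for e, _, _ in components)
    var_random = sum(vr for _, vr, _ in components)
    sd_systematic = sum(math.sqrt(vs) for _, _, vs in components)
    return estimate, math.sqrt(var_random + sd_systematic ** 2)

# Two hypothetical wet-chemistry components (grams, grams^2)
est, sd = combine_holdup([(10.0, 4.0, 1.0), (20.0, 9.0, 4.0)])
print(est, round(sd, 3))  # 30.0 4.69
```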

  4. Estimation methods for special nuclear materials holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Picard, R.R.

    1984-01-01

    The potential value of statistical models for the estimation of residual inventories of special nuclear materials was examined using holdup data from processing facilities and through controlled experiments. Although the measurement of hidden inventories of special nuclear materials in large facilities is a challenging task, reliable estimates of these inventories can be developed through a combination of good measurements and the use of statistical models. 7 references, 5 figures

  5. Estimation methods for process holdup of special nuclear materials

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Picard, R.R.; Marshall, R.S.

    1984-06-01

    The US Nuclear Regulatory Commission sponsored a research study at the Los Alamos National Laboratory to explore the possibilities of developing statistical estimation methods for materials holdup at highly enriched uranium (HEU)-processing facilities. Attempts at using historical holdup data from processing facilities and selected holdup measurements at two operating facilities confirmed the need for high-quality data and reasonable control over process parameters in developing statistical models for holdup estimations. A major effort was therefore directed at conducting large-scale experiments to demonstrate the value of statistical estimation models from experimentally measured data of good quality. Using data from these experiments, we developed statistical models to estimate residual inventories of uranium in large process equipment and facilities. Some of the important findings of this investigation are the following: prediction models for the residual holdup of special nuclear material (SNM) can be developed from good-quality historical data on holdup; holdup data from several of the equipment used at HEU-processing facilities, such as air filters, ductwork, calciners, dissolvers, pumps, pipes, and pipe fittings, readily lend themselves to statistical modeling of holdup; holdup profiles of process equipment such as glove boxes, precipitators, and rotary drum filters can change with time; therefore, good estimation of residual inventories in these types of equipment requires several measurements at the time of inventory; although measurement of residual holdup of SNM in large facilities is a challenging task, reasonable estimates of the hidden inventories of holdup to meet the regulatory requirements can be accomplished through a combination of good measurements and the use of statistical models. 44 references, 62 figures, 43 tables

  6. Tracer techniques in estimating nuclear materials holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.

    1987-01-01

    Residual inventory of nuclear materials remaining in processing facilities (holdup) is recognized as an insidious problem for safety of plant operations and safeguarding of special nuclear materials (SNM). This paper reports on an experimental study where a well-known method of radioanalytical chemistry, namely the tracer technique, was successfully used to improve nondestructive measurements of holdup of nuclear materials in a variety of plant equipment. Such controlled measurements can improve the sensitivity of measurements of residual inventories of nuclear materials in process equipment by several orders of magnitude, and the good-quality data obtained lend themselves to developing mathematical models of holdup of SNM during stable plant operations

  7. In-process inventory estimation for pulsed columns and mixer-settlers

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, D.D.; Burkhart, L.E.; Beyerlein, A.L.

    1980-01-01

    Nuclear materials accounting and control in fuels reprocessing plants can be improved by near-real-time estimation of the nuclear materials inventory in solvent-extraction contactors. Techniques are being developed for the estimation of the in-process inventory in contactors. These techniques are derived from recent developments in chemical modeling of contactor systems, on-line measurements for materials accounting and control of the Purex process, and computer-based data acquisition and analysis methods.

  8. In-process inventory estimation for pulsed columns and mixer-settlers

    International Nuclear Information System (INIS)

    Cobb, D.D.; Burkhart, L.E.; Beyerlein, A.L.

    1980-01-01

    Nuclear materials accounting and control in fuels reprocessing plants can be improved by near-real-time estimation of the nuclear materials inventory in solvent-extraction contactors. Techniques are being developed for the estimation of the in-process inventory in contactors. These techniques are derived from recent developments in chemical modeling of contactor systems, on-line measurements for materials accounting and control of the Purex process, and computer-based data acquisition and analysis methods

  9. Calculation code PULCO for Purex process in pulsed column

    International Nuclear Information System (INIS)

    Gonda, Kozo; Matsuda, Teruo

    1982-03-01

    The calculation code PULCO, which can simulate the Purex process using a pulsed column as an extractor, has been developed. PULCO is based on the fundamental concept that mass transfer within a pulsed column occurs through the interface between liquid drops and the continuous-phase fluid; unlike conventional codes, it explicitly represents the phenomena actually occurring in a pulsed column, such as the formation of liquid drops, their rise and fall, and their coalescence, so that these can be correctly simulated. PULCO incorporates actually measured values of the fundamental quantities representing the extraction behavior of liquid drops in a pulsed column: the mass transfer coefficient of each component, the diameter and velocity of the liquid drops, the holdup of the dispersed phase, and the axial turbulent-diffusion coefficient. The calculated results were verified by installing a pulsed column of 50 mm inside diameter and 2 m length with 40 plate stages in a glove box for an unirradiated uranium-plutonium mixed system. The calculated and test results were in good agreement, and the validity of PULCO was confirmed. (Kako, I.)

  10. Operation of the annular pulsed column, (2)

    International Nuclear Information System (INIS)

    Takahashi, Keiki; Tsukada, Takeshi

    1988-01-01

    The heat of reaction generated by the uranium extraction is considered to form the temperature profile inside the pulsed column. A simulation code was developed to estimate the temperature profile, taking into account heat generation and counter-current heat transfer. The temperature profiles calculated using this code were found to depend on both the position of the extraction zone and the operating conditions. The reported experimental result was fairly well represented by this simulation code. We consider that the presented simulation code is capable of providing the temperature profile in the pulsed column and is useful for monitoring the uranium extraction zone. (author)

  11. Effect of backmixing on pulse column performance

    International Nuclear Information System (INIS)

    Miao, Y.W.

    1979-05-01

    A critical survey of the published literature concerning dispersed phase holdup and longitudinal mixing in pulsed sieve-plate extraction columns has been made to assess the present state-of-the-art in predicting these two parameters, both of which are of critical importance in the development of an accurate mathematical model of the pulse column. Although there are many conflicting correlations of these variables as a function of column geometry, operating conditions, and physical properties of the liquid systems involved, it has been possible to develop new correlations which appear to be useful and which are consistent with much of the available data over the limited range of variables most likely to be encountered in plant-sized equipment. The correlations developed were used in a stagewise model of the pulse column to predict product concentrations, solute inventory, and concentration profiles in a column for which limited experimental data were available. Reasonable agreement was obtained between the mathematical model and the experimental data. Complete agreement, however, can only be obtained after a correlation for the extraction efficiency has been developed. The correlation of extraction efficiency was beyond the scope of this work
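The simplest form of the stagewise framework this survey builds on is a counter-current cascade of equilibrium stages with a linear distribution coefficient and no backmixing (the surveyed correlations add backmixing terms to exactly this skeleton). A sketch under those simplifying assumptions:

```python
def aqueous_profile(n_stages, x_feed, m, s_over_f, iters=5000):
    """Counter-current equilibrium-stage cascade, no backmixing.
    Aqueous feed enters stage 0, fresh solvent enters stage n_stages-1;
    each stage reaches equilibrium y = m*x. The linear stage balances are
    solved by repeated Gauss-Seidel sweeps."""
    x = [x_feed] * n_stages  # aqueous concentration per stage
    for _ in range(iters):
        for i in range(n_stages):
            x_in = x_feed if i == 0 else x[i - 1]
            y_in = 0.0 if i == n_stages - 1 else m * x[i + 1]
            # stage balance: F*x_in + S*y_in = F*x[i] + S*(m*x[i])
            x[i] = (x_in + s_over_f * y_in) / (1.0 + s_over_f * m)
    return x

# Two stages, extraction factor m*S/F = 2: Kremser predicts raffinate x/xF = 1/7
profile = aqueous_profile(2, 1.0, m=2.0, s_over_f=1.0)
print([round(v, 4) for v in profile])  # [0.4286, 0.1429]
```

Multiplying each stage's aqueous and organic concentrations by the stage volumes gives the solute-inventory estimate the abstract mentions.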

  12. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels, including consideration of material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balance for thorium during steady-state process operation

  13. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels, including consideration of material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balances for thorium (a less valuable material than uranium) during steady-state process operation

  14. Statistical sampling for holdup measurement

    International Nuclear Information System (INIS)

    Picard, R.R.; Pillay, K.K.S.

    1986-01-01

    Nuclear materials holdup is a serious problem in many operating facilities. Estimating amounts of holdup is important for materials accounting and, sometimes, for process safety. Clearly, measuring holdup in all pieces of equipment is not a viable option in terms of time, money, and radiation exposure to personnel. Furthermore, 100% measurement is not only impractical but unnecessary for developing estimated values. Principles of statistical sampling are valuable in the design of cost-effective holdup monitoring plans and in quantifying uncertainties in holdup estimates. The purpose of this paper is to describe those principles and to illustrate their use
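The sampling principles referred to can be illustrated with a textbook stratified estimator: equipment is grouped into classes, a random subset of each class is measured, and class means are scaled to class sizes. The strata and numbers below are hypothetical:

```python
def stratified_total(strata):
    """strata: list of (N_population, sample_values) per equipment class.
    Scales each class's sample mean to the class size; the variance term
    carries the usual finite-population correction N*(N-n)*s^2/n."""
    total, variance = 0.0, 0.0
    for n_pop, sample in strata:
        n = len(sample)
        mean = sum(sample) / n
        s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
        total += n_pop * mean
        variance += n_pop * (n_pop - n) * s2 / n
    return total, variance ** 0.5

# Hypothetical: 10 pipe runs (3 measured) and 4 filters (2 measured), grams
total, sd = stratified_total([(10, [1.0, 2.0, 3.0]), (4, [5.0, 7.0])])
print(total, round(sd, 2))  # 44.0 5.6
```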

  15. Annular pulse column development studies

    International Nuclear Information System (INIS)

    Benedict, G.E.

    1980-01-01

    The capacity of critically safe cylindrical pulse columns limits the size of nuclear fuel solvent extraction plants because of the limited cross-sectional area of plutonium, U-235, or U-233 processing columns. Thus, there is a need to increase the cross-sectional area of these columns. This can be accomplished through the use of a column having an annular cross section. The preliminary testing of a pilot-plant-scale annular column has been completed and is reported herein. The column is made from 152.4-mm (6-in.) glass pipe sections with an 89-mm (3.5-in.) o.d. internal tube, giving an annular width of 32 mm (1.25 in.). Louver plates are used to swirl the column contents to prevent channeling of the phases. The data from this testing indicate that this approach can successfully provide larger-cross-section critically safe pulse columns. While the capacity is only 70% of that of a cylindrical column of similar cross section, the efficiency is almost identical to that of a cylindrical column. No evidence was seen of any non-uniform pulsing action from one side of the column to the other

  16. Detection of uranium extraction zone by axial temperature profiles in a pulsed column for Purex process

    International Nuclear Information System (INIS)

    Tsukada, T.; Takahashi, K.

    1991-01-01

    A new method was presented for detecting the uranium extraction zone in a pulsed column by measuring the axial temperature profile originating from the reaction heat released during uranium extraction. Key parameters of the temperature profiles were estimated with a code developed for calculating temperature profiles in a direct-contact heat exchanger such as a pulsed column, and were verified using data from a small pulsed column in which the reaction heat was simulated by injecting hot water. Finally, the results were compared with those from actual uranium extraction tests, indicating that the presented method is promising for detecting the uranium extraction zone in a pulsed column. (author)
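In code, locating the zone from a measured profile amounts to a peak search over the axial temperature readings, since the exothermic extraction heats the stages where mass transfer is concentrated. The profile below is invented for illustration:

```python
def extraction_zone_stage(temps):
    """Index of the hottest axial position in a list of stage temperatures:
    a simple proxy for where the extraction reaction is occurring."""
    return max(range(len(temps)), key=temps.__getitem__)

# Hypothetical axial profile (deg C), bottom stage first
profile = [25.1, 26.0, 30.8, 33.9, 29.5, 26.7]
print(extraction_zone_stage(profile))  # 3
```

A production monitor would smooth the readings and track the peak over time; this sketch shows only the core idea.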

  17. Learning to live with holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Picard, R.R.

    1986-06-01

    Holdup of special nuclear materials in processing facilities is recognized by facility operators and regulatory agencies as an insidious materials control and accounting problem. However, there have been few serious efforts to address holdup as a materials accounting problem and to accommodate the legitimate concerns of both groups. This paper reviews past efforts and identifies several key elements relevant to resolving the problem in a pragmatic fashion. These key elements relate to the recognition of holdup as a serious materials accounting problem, innovations in holdup monitoring and their limitations, the role of modeling and sampling in holdup estimation, and the potential value of plant-specific materials accountability requirements. Suggestions are offered for developing cost-effective procedures for holdup measurements/estimation, combining available technologies with properly designed sampling plans

  18. Alpha-contained laboratory scale pulse column facility for SRL

    International Nuclear Information System (INIS)

    Reif, D.J.; Cadieux, J.R.; Fauth, D.J.; Thompson, M.C.

    1980-01-01

    For studying solvent extraction processes, a laboratory-sized pulse column facility was constructed at the Savannah River Laboratory. This facility, in conjunction with existing miniature mixer-settler equipment and the centrifugal contactor facility currently under construction at SRL, provides capability for cross comparison of solvent extraction technology. This presentation describes the design and applications of the Pulse Column Facility at SRL

  19. Comparison of predicted and measured pulsed-column profiles and inventories

    International Nuclear Information System (INIS)

    Ostenak, C.A.; Cermak, A.F.

    1983-01-01

    Nuclear materials accounting and process control in fuels reprocessing plants can be improved by near-real-time estimation of the in-process inventory in solvent-extraction contactors. Experimental studies were conducted on pilot- and plant-scale pulsed columns by Allied-General Nuclear Service (AGNS), and the extensive uranium concentration-profile and inventory data were analyzed by Los Alamos and AGNS to develop and evaluate different predictive inventory techniques. Preliminary comparisons of predicted and measured pulsed-column profiles and inventories show promise for using these predictive techniques to improve nuclear materials accounting and process control in fuels reprocessing plants

  20. NUMATH: a nuclear material holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1982-01-01

    NUMATH provides inventory estimation by utilizing previous inventory measurements, operating data, and, where available, on-line process measurements. For the present time, NUMATH's purpose is to provide a reasonable, near-real-time estimate of material inventory until accurate inventory determination can be obtained from chemical analysis. Ultimately, it is intended that NUMATH will further utilize on-line analyzers and more advanced calculational techniques to provide more accurate inventory determinations and estimates

  1. Standards for holdup measurement

    International Nuclear Information System (INIS)

    Zucker, M.S.

    1982-01-01

    Holdup measurements, needed for material balance, depend heavily on standards and on interpretation of the calibration procedure. More than in other measurements, the calibration procedure using the standard becomes part of the standard. Standards practical for field use and calibration techniques have been developed. While accuracy in holdup measurements is comparatively poor, avoidance of bias is a necessary goal

  2. Applicability of hydroxylamine nitrate reductant in pulse-column contactors

    International Nuclear Information System (INIS)

    Reif, D.J.

    1983-05-01

    Uranium and plutonium separations were made from simulated breeder reactor spent fuel dissolver solution with laboratory-sized pulse column contactors. Hydroxylamine nitrate (HAN) was used for reduction of plutonium(IV). An integrated extraction-partition system, simulating a breeder fuel reprocessing flowsheet, carried out a partial partition of uranium and plutonium in the second contactor. Tests have shown that acceptable coprocessing can be obtained using HAN as a plutonium reductant. Pulse column performance was stable even though gaseous HAN oxidation products were present in the column. Gas evolution rates up to 0.27 cfm/ft² of column cross section were tested and found acceptable

  3. Development of the object-oriented analysis code for the estimation of material balance in pyrochemical reprocessing process (2). Modification of the code for the analysis of holdup of nuclear materials in the process

    International Nuclear Information System (INIS)

    Okamura, Nobuo; Tanaka, Hiroshi

    2001-04-01

    Pyrochemical reprocessing is thought to be a promising process for the FBR fuel cycle, mainly from the economical viewpoint. However, material behavior in the process is not sufficiently understood because of the lack of experimental data. The authors have developed an object-oriented analysis code for estimating the material balance in the process, with the flexibility to accommodate changes in the process flow sheet. The objective of this study is to modify the code to analyze the holdup of nuclear materials in the pyrochemical process from the safeguards viewpoint, because the holdup in this process may be larger than in an aqueous process. As a result of the modification, the relationship between the production of nuclear materials and their holdup in the process can be evaluated by the code. (author)

  4. Technical contribution to the project and operation of pulsed columns

    International Nuclear Information System (INIS)

    Kromek, I.B.; Ikuta, A.

    1976-01-01

    Characteristics and factors that affect the performance of pulse columns used in the purification of metals such as thorium, uranium, and plutonium are described. The pulse generator and the control instrumentation of these columns are also described. This report is based mainly on the work of Richardson and Platt and on adaptations made during the design and installation of the first combined extraction-scrub pulse column at CEQ/IEA (Sao Paulo, Brazil) [pt

  5. SNM holdup assessment of Los Alamos exhaust ducts

    International Nuclear Information System (INIS)

    Marshall, R.S.

    1994-02-01

    Fissile material holdup in glovebox and fume hood exhaust ducting has been quantified for all Los Alamos duct systems. Gamma-based, nondestructive measurements were used to quantify holdup. The measurements were performed during three measurement campaigns. The first campaign, Phase I, provided foot-by-foot, semiquantitative measurement data on all ducting. These data were used to identify ducting that required more accurate (quantitative) measurement. Of the 280 duct systems receiving Phase I measurements, 262 indicated less than 50 g of fissile holdup and 19 indicated fissile holdup of 50 or more grams. Seven duct systems were measured in a second campaign, called Series 1, Phase II. Holdup estimates on these ducts ranged from 421 g of ²³⁵U in a duct servicing a shut-down uranium-machining facility to 39 g of ²³⁹Pu in a duct servicing an active plutonium-processing facility. Measurements performed in the second campaign proved excessively laborious, so a third campaign was initiated that used more efficient instrumentation at some sacrifice in measurement quality. Holdup estimates for the 12 duct systems measured during this third campaign ranged from 70 g of ²³⁵U in a duct servicing analytical laboratories to 1 g of ²³⁵U and 1 g of ²³⁹Pu in a duct carrying exhaust air to a remote filter building. These quantitative holdup estimates support the conclusion made at the completion of the Phase I measurements that only ducts servicing shut-down uranium operations contain about 400 g of fissile holdup. No ventilation ducts at Los Alamos contain sufficient fissile material holdup to present a criticality safety concern

  6. Holdup measurements under realistic conditions

    International Nuclear Information System (INIS)

    Sprinkel, J.K. Jr.; Marshall, R.; Russo, P.A.; Siebelist, R.

    1997-01-01

    This paper reviews the documentation of the precision and bias of holdup (residual nuclear material remaining in processing equipment) measurements and presents previously unreported results. Precision and bias results for holdup measurements are reported from training seminars with simulated holdup, which represent the best possible results, and compared to actual plutonium processing facility measurements. Holdup measurements for plutonium and uranium processing plants are also compared to reference values. Recommendations for measuring holdup are provided for highly enriched uranium facilities and for low enriched uranium facilities. The random error component of holdup measurements is less than the systematic error component. The most likely factor in measurement error is incorrect assumptions about the measurement, such as background, measurement geometry, or signal attenuation. Measurement precision on the order of 10% can be achieved with some difficulty. Bias of poor quality holdup measurement can also be improved. However, for most facilities, holdup measurement errors have no significant impact on inventory difference, sigma, or safety (criticality, radiation, or environmental); therefore, it is difficult to justify the allocation of more resources to improving holdup measurements. 25 refs., 10 tabs

  7. Study of gas holdup and pressure characteristics in a column flotation cell using coal

    Energy Technology Data Exchange (ETDEWEB)

    Shukla, S.C.; Kundu, G.; Mukherjee, D. [Indian Institute of Technology, Kharagpur (India). Dept. of Chemical Engineering

    2010-07-15

    The present work was carried out to observe the effect of process variables (gas flow rate, feed flow rate, solids concentration, and frother concentration) on gas holdup and pressure characteristics in a flotation column using coal. Gas holdup was estimated using the phase-separation method, while piezometers were used to obtain the column's axial pressure profile. It was observed that gas holdup in the collection zone was affected by both air and feed flow rates. Up to a 6% change in gas holdup may occur when the feed flow rate changes from 1 to 2 cm/s. It was also observed that the addition of coal decreased the gas holdup, while the addition of methyl isobutyl carbinol (MIBC) had the opposite effect. An almost linear variation in the column's axial pressure characteristics was observed with gas flow rate. An empirical relationship between gas holdup in the flotation column and the column's axial pressure difference was developed.
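The physical link between axial pressure difference and gas holdup follows from hydrostatics: an aerated section weighs less than liquid alone, so the pressure drop between two piezometer taps falls as holdup rises. A sketch of that relation with illustrative readings (not the paper's data or its fitted correlation):

```python
G = 9.81  # gravitational acceleration, m/s^2

def gas_holdup(dp_pa, rho_liquid, dz_m):
    """Fractional gas holdup between two piezometer taps a vertical
    distance dz_m apart, neglecting solids and friction:
    eps_g = 1 - dP / (rho_l * g * dz)."""
    return 1.0 - dp_pa / (rho_liquid * G * dz_m)

# Hypothetical: 8.5 kPa measured across 1 m of water-like pulp (1000 kg/m^3)
print(round(gas_holdup(8500.0, 1000.0, 1.0), 3))  # 0.134
```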

  8. Effect of pulsed-column-inventory uncertainty on dynamic materials accounting

    International Nuclear Information System (INIS)

    Ostenak, C.A.

    1985-01-01

    Reprocessing plants worldwide use the Purex solvent-extraction process and pulsed-column contactors to separate and purify uranium and plutonium from spent nuclear fuels. The importance of contactor in-process inventory to dynamic materials accounting in reprocessing plants is illustrated using the Allied-General Nuclear Services Plutonium Purification Process (PPP) of the now decommissioned Barnwell Nuclear Fuels Plant. This study shows that (1) good estimates of column inventory are essential for detecting short-term losses of in-process materials, but that (2) input-output (transfer) measurement correlations limit the accounting sensitivity for longer accounting periods (greater than or equal to 1 wk for the PPP). 6 refs., 2 figs., 3 tabs
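The trade-off this study describes shows up in a one-line error propagation: over a cumulative balance the intermediate inventory estimates cancel, so the end-point inventory errors stay fixed while transfer errors accumulate with the number of periods. A sketch assuming independent transfer errors (the study notes that transfer measurement correlations make the long-term case worse than this):

```python
def cumulative_id_sigma(inv_sd, transfer_sd, n_periods):
    """Std. dev. of the cumulative inventory difference after n_periods,
    with independent errors: sqrt(2*inv_sd^2 + n*transfer_sd^2).
    Inventory error dominates short balances; transfer error dominates
    as n_periods grows."""
    return (2.0 * inv_sd ** 2 + n_periods * transfer_sd ** 2) ** 0.5

# Hypothetical sigmas (kg Pu): column inventory 0.5, net transfer 0.1/period
print(round(cumulative_id_sigma(0.5, 0.1, 1), 3))   # 0.714
print(round(cumulative_id_sigma(0.5, 0.1, 25), 3))  # 0.866
```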

  9. A study of pulse columns for thorium fuel reprocessing

    International Nuclear Information System (INIS)

    Fumoto, H.

    1982-03-01

    Two 5 m pulse columns with the same cartridge geometries were installed to investigate their performance. The characteristic differences between the aqueous-continuous and the organic-continuous columns were investigated experimentally. A ternary system of 30% TBP in dodecane-acetic acid-water was adopted for the mass-transfer study. It was concluded that the overall mass-transfer coefficient was independent of whether the mass transfer is from the dispersed to the continuous phase or from the continuous to the dispersed phase. Thorium nitrate was extracted and re-extracted using both modes of operation. Both HETS and HTU were obtained. The aqueous-continuous column gave a much shorter HTU than the organic-continuous column; in re-extraction the organic-continuous column gave the shorter HTU. The Thorex processes for uranium and thorium co-extraction, co-stripping, and partitioning were studied. Both acid feed solution and acid-deficient feed solution were investigated. The concentration profiles along the column height were obtained. The data were analysed with McCabe-Thiele diagrams to evaluate HETS. (orig./HP) [de

  10. Liquid holdup in turbulent contact absorber

    International Nuclear Information System (INIS)

    Haq, A.; Zaman, M.; Inayat, M.H.; Chughtai, I.R.

    2009-01-01

    Dynamic liquid holdup in a turbulent contact absorber was obtained through the quick shut-off valves technique. Experiments were carried out in a Perspex column. The effects of liquid velocity, gas velocity, packing diameter, packing density, and packing height on dynamic liquid holdup were studied. Hollow spherical high-density polyethylene (HDPE) balls were used as inert fluidized packing. Experiments were performed over a practical range of liquid and gas velocities. Holdup was calculated on the basis of static bed height. Liquid holdup increases with increasing liquid and gas velocities for both type 1 and type 2 modes of fluidization. Liquid holdup increases with packing density. No effect of packing diameter was observed on liquid holdup. (author)

  11. Ultrasonic methods for locating hold-up

    International Nuclear Information System (INIS)

    Sinha, D.N.; Olinger, C.T.

    1995-01-01

    Hold-up remains one of the major contributors to unaccounted-for material and can be a costly problem in decontamination and decommissioning activities. Ultrasonic techniques are being developed to noninvasively monitor hold-up in process equipment whose inner surface may be in contact with the hold-up material. These techniques may be useful in improving hold-up measurements as well as in optimizing decontamination techniques

  12. Use of storage tank holdup measurements to reduce inventory differences in an ion exchange process

    International Nuclear Information System (INIS)

    Bonner, C.A.; Marshall, R.

    1986-01-01

    Inventory differences (ID) in an ion exchange process area have plagued the Los Alamos National Laboratory for years. The problem has always been attributed to plutonium precipitation in banks of horizontally oriented storage tanks; however, efforts to maintain the precipitates at low or even stable levels failed. Factoring tank holdup measurements into the end-of-month inventory balance would probably solve the ID problem; however, the authors were advised that gamma-based holdup measurements would yield very poor quality holdup estimates because of difficulties in determining transmission corrections and tank ''cross talk''. When the ID problem became particularly troublesome in the spring of 1985, the authors evaluated two different gamma-based measurement techniques for estimating tank holdup. Not only did the holdup estimates made by the two techniques agree, but plutonium recovered during intensive tank cleanout confirmed that the holdup measurements were of sufficient accuracy to be used for material balance adjustments. The measurement method chosen for routine use is unusual in that it is calibrated using tank cleanout data and requires no transmission corrections. The holdup measurements are made monthly and have dramatically reduced end-of-month inventory differences. This paper presents both a description of the measurement methodology and the resulting inventory difference improvements

  13. Holdup measurement for nuclear fuel manufacturing plants

    International Nuclear Information System (INIS)

    Zucker, M.S.; Degen, M.; Cohen, I.; Gody, A.; Summers, R.; Bisset, P.; Shaub, E.; Holody, D.

    The assay of nuclear material holdup in fuel manufacturing plants is a laborious but often necessary part of completing the material balance. A range of instruments, standards, and a methodology for assaying holdup has been developed. The objectives of holdup measurement are to ascertain the amount and distribution of the SNM and how firmly fixed it is. The purposes are reconciliation of material imbalance during or after a manufacturing campaign or plant decommissioning, deciding security requirements, and determining whether further recovery efforts are justified

  14. Utility of Monte Carlo Modelling for Holdup Measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),

    2005-01-01

    Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF6 gas, from about 20% to 93%. The 235U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and to meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., one determined by multiple independent variables) on the portable NDA results for very large and bulk converters, and it contributes greatly to the total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for 235U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well

  15. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to measure the holdup quantity and location directly, so these must be inferred from measured radiation fields, primarily gamma rays and less frequently neutrons. Current methods to quantify holdup, i.e., the Generalized Geometry Holdup (GGH) approach, rely primarily on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternative method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of that configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined and hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem, which resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
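    The probabilistic approach described can be sketched as follows, under simplifying assumptions that are mine, not the project's: a linear detector response matrix, Gaussian measurement errors, and a flat prior over a discrete set of candidate source configurations. The response matrix, candidates, and readings are invented for illustration.

    ```python
    import math

    def posterior(candidates, response, measured, sigma):
        """Rate candidate holdup configurations by plausibility given data.
        candidates: list of source-mass vectors (one mass per location);
        response: rows = detectors, columns = response per unit mass;
        measured: detector readings; sigma: measurement std deviation."""
        weights = []
        for masses in candidates:
            # forward model: predicted reading = response matrix * masses
            pred = [sum(r * m for r, m in zip(row, masses)) for row in response]
            chi2 = sum((d - p) ** 2 / sigma ** 2 for d, p in zip(measured, pred))
            weights.append(math.exp(-0.5 * chi2))   # Gaussian likelihood
        total = sum(weights)
        return [w / total for w in weights]         # Bayes' theorem, flat prior

    # Two detectors viewing two possible deposit locations (hypothetical):
    R = [[1.0, 0.2], [0.2, 1.0]]                    # response per unit mass
    data = [10.4, 2.9]                              # measured count rates
    cands = [[10.0, 1.0], [5.0, 5.0], [1.0, 10.0]]  # candidate mass vectors
    p = posterior(cands, R, data, sigma=0.5)
    ```

    Here the first configuration reproduces the data almost exactly, so nearly all posterior probability concentrates on it; with noisier or more ambiguous data the posterior would spread over several candidates, which is exactly the multiple-solution behavior the abstract describes.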

  16. Nuclear material inventory estimation in solvent extraction contractors II

    International Nuclear Information System (INIS)

    Beyerlein, A.

    1987-11-01

    The effectiveness of near-real-time nuclear materials accounting in reprocessing facilities can be limited by inventory variations in the separations contactors. Investigations are described in three areas: (i) improvements in the model the authors have described previously for steady-state inventory estimation in mixer-settler contactors, (ii) extension of the steady-state model to transient inventory estimation under non-steady-state conditions, and (iii) development of the computer model CUSEP (Clemson University Solvent Extraction Program) for simulating concentration profiles and nuclear material inventories in pulsed-column contactors. The improvements in the steady-state model described in this report are the simplification of the methods for evaluating model parameters and the development of methods for reducing the equations so that the total inventory of a set of contactors is estimated directly. Concentration profiles and inventories calculated with CUSEP are compared with measured data from pilot-scale contactors containing uranium. Excellent agreement between measured and simulated data is obtained for both the concentration profiles and the inventories, demonstrating that the program correctly predicts the concentration dispersion caused by pulsing and the dispersed-phase holdup within the contactor. Further research is planned to investigate (i) correction of the MUF (Material Unaccounted For) and CUMUF (Cumulative Material Unaccounted For) tests for mixer-settler contactor inventory using the simplified model developed in this work, (ii) development of a simple inventory estimation model for pulsed-column contactors, similar to that developed for mixer-settlers, using CUSEP to provide the necessary database, and (iii) sources of bias appearing in the MUF and CUMUF tests, using computer simulation techniques. Refs

  17. Lithium-sodium separation by ion-exchange. Particular study of a pulsed column

    International Nuclear Information System (INIS)

    Auvert, H.

    1966-02-01

    A study is made of the operating conditions and constraints in the case of a moving-bed ion-exchange column subjected to pulses. The example chosen to illustrate its application concerns the lithium-sodium separation in a hydroxide medium (LiOH, NaOH). In the first part, the physico-chemical characteristics of the exchange and the kinetics of the exchange reaction are considered. In the second part, the operation of the pulsed column is studied. Using the results obtained in the first part, the conditions required for steady-state operation are determined. Once this is obtained, it is possible to calculate the height equivalent to a theoretical plate (HETP) of the installation. A study is also made of 'sliding', a phenomenon peculiar to pulsed columns. The results obtained show that it is possible, using laboratory tests, to determine the characteristics and the operating conditions of a moving-bed ion-exchange column. (author) [fr

  18. PULSE COLUMN

    Science.gov (United States)

    Grimmett, E.S.

    1964-01-01

    This patent covers a continuous countercurrent liquid-solids contactor column having a number of contactor stages, each comprising a perforated plate, a layer of balls, and a downcomer tube; a liquid-pulsing piston; and a solids discharger formed of a conical section at the bottom of the column and a tubular extension on the lowest downcomer terminating in the conical section. Between the conical section and the downcomer extension a small annular opening is formed, through which fall solids coming through the perforated plate of the lowest contactor stage. This annular opening is small enough that the pressure drop across it is greater than the pressure drop upward through the lowest contactor stage. (AEC)

  19. Calculation code of mass and heat transfer in a pulsed column for Purex process

    International Nuclear Information System (INIS)

    Tsukada, Takeshi; Takahashi, Keiki

    1993-01-01

    A calculation code was developed for analyzing extraction behavior in a pulsed column employed in the extraction process of a reprocessing plant. This code was also combined with our previously developed calculation code for axial temperature profiles in a pulsed column. The one-dimensional dispersion model was employed both for the extraction behavior analysis and for the axial temperature profile analysis. Reported values of the fluid characteristics coefficients, the transfer coefficients and the diffusivities in the pulsed column were used. The calculated steady-state concentration profiles of HNO3, U and Pu agree well with reported experimental results. The concentration and temperature profiles were calculated under operating conditions that induce abnormal U extraction behavior, i.e. the U extraction zone moving to the bottom of the column. Although there are slight differences between calculated and experimental values, the developed code can be applied to simulation under normal operating conditions and relatively slow transient conditions. Pu accumulation phenomena were also analyzed with this code, and the accumulation tendency is similar to reported analysis results. (author)
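    The one-dimensional dispersion model mentioned above can be illustrated with a minimal finite-difference sketch for a single solute with axial dispersion and first-order interphase transfer. The discretization, boundary conditions, and parameter values are illustrative assumptions, not the authors' code.

    ```python
    def dispersion_profile(n=101, L=4.0, u=0.01, D=1e-3, k=0.005, c_in=1.0):
        """Solve the steady 1-D dispersion model u*dc/dz = D*d2c/dz2 - k*c
        with c(0) = c_in (inlet) and dc/dz(L) = 0 (outlet) on n grid points,
        using central differences and the Thomas (tridiagonal) algorithm."""
        h = L / (n - 1)
        a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
        b[0], d[0] = 1.0, c_in                  # inlet: fixed concentration
        for i in range(1, n - 1):               # interior nodes
            a[i] = D / h**2 + u / (2 * h)       # coefficient of c[i-1]
            b[i] = -2 * D / h**2 - k            # coefficient of c[i]
            c[i] = D / h**2 - u / (2 * h)       # coefficient of c[i+1]
        a[n-1], b[n-1] = -1.0, 1.0              # outlet: zero gradient
        # Thomas algorithm: forward elimination, then back substitution
        for i in range(1, n):
            w = a[i] / b[i-1]
            b[i] -= w * c[i-1]
            d[i] -= w * d[i-1]
        x = [0.0] * n
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            x[i] = (d[i] - c[i] * x[i+1]) / b[i]
        return x

    profile = dispersion_profile()              # concentration vs. height
    ```

    With these hypothetical parameters the solute concentration decays monotonically from the inlet value along the column, the qualitative shape such codes fit to measured column profiles.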

  20. Studies on the hydrodynamic properties of the sieve plate pulsed column for 30% TRPO-kerosene/nitric acid system

    International Nuclear Information System (INIS)

    Ma Ronglin; Chen Jing; Xu Shiping; Wu Qiulin; Tai Derong; Song Chongli

    2000-01-01

    The hydrodynamic properties of a sieve-plate pulsed column are studied for the 30% TRPO-kerosene/nitric acid system. With the organic or the aqueous phase as the continuous phase, the dispersed phase behaves mainly as coalescing or dispersing, respectively. The sieve-plate pulsed column has a fairly high flooding throughput for this system. Under the same pulsation intensity, the flooding throughput with the organic phase as the continuous phase is greater than with the aqueous phase as the continuous phase

  1. Continuous Holdup Measurements with Silicon P-I-N Photodiodes

    International Nuclear Information System (INIS)

    Bell, Z.W.; Oberer, R.B.; Williams, J.A.; Smith, D.E.; Paulus, M.J.

    2002-01-01

    We report on the behavior of silicon P-I-N photodiodes used to perform holdup measurements on plumbing. These detectors differ from traditional scintillation detectors in that no high voltage is required, no scintillator is used (gamma and X rays are converted directly by the diode), and they are considerably more compact. Although the small size of the diodes means they are not nearly as efficient as scintillation detectors, it also means that a detector module, including one or more diodes, pulse shaping electronics, analog-to-digital converter, embedded microprocessor, and digital interface, can be realized in a package (excluding shielding) the size of a pocket calculator. This small size, coupled with the low-voltage power requirement, completely solid-state realization, and internal control functions, allows these detectors to be deployed strategically on a permanent basis, thereby reducing or eliminating the need for manual holdup measurements. In this paper, we report on the measurement of gamma and X rays from 235 U and 238 U contained in steel pipe. We describe the features of the spectra and the electronics of the device, and show how a network of them may be used to improve estimates of holdup inventory

  2. Purex pulse column designs for capacity factor of 3.0 to 3.5

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, G.L.

    1955-04-12

    This memorandum indicates the Purex-Plant pulse-column and pulse-generator revisions which would be required to assure an instantaneous capacity of 25 tons U/day with a 20% capacity safety margin under Purex HW #3 Flowsheet conditions. (The use of the Purex HW #4 Flowsheet (6) with the revised columns would be expected to increase the capacity to 29 or 30 tons U/day.) The indicated design changes are recorded here for study and for possible reference if need for increased production capacity should arise. No recommendation for adoption at this time is made.

  3. Use of a pulsed column with discs and crowns for uranium extraction from phosphoric acid

    International Nuclear Information System (INIS)

    1982-04-01

    The physico-chemistry of the phosphoric acid-uranium-dioctylpyrophosphoric acid system is studied to establish analytical methods and extraction parameters (oxidation state of uranium and iron, phosphorus concentration, extractant concentration). Extraction is then carried out on a pilot scale in a liquid-liquid extraction column 4 m high and 50 mm in diameter packed with discs and crowns. Column efficiency is evaluated by studying uranium transfer as a function of operating conditions. The results obtained are extrapolated to an industrial scale, and a comparative economic evaluation is made between a pulsed column and a mixer-settler [fr

  4. Use of a pulsed column contactor as a continuous oxalate precipitation reactor

    International Nuclear Information System (INIS)

    Borda, Gilles; Brackx, Emmanuelle; Boisset, Laurence; Duhamet, Jean; Ode, Denis

    2011-01-01

    Research highlights: → A new type of continuous precipitating device was patented by CEA and tested with the reaction between a surrogate nitrate, cerium(III) or neodymium(III), and an oxalate complexing agent. → The precipitate is confined in the aqueous-phase emulsion in tetrapropylene hydrogen and does not form deposits on the vessel walls. → The measured size of the precipitate ranges from 20 to 40 μm, meeting the process requirements for filtration, and the precipitation reaction is complete. → The laboratory design can be extrapolated to an industrial uranium(IV) and minor actinide(III) coprecipitating column. - Abstract: The current objective of coprecipitating uranium and minor actinides in order to fabricate a new nuclear fuel by direct (co)precipitation for further transmutation requires the development of a specific technology meeting the following requirements: nuclear maintenance, criticality, and potentially high flow rates due to global coprecipitation. A new type of device designed and patented by the CEA was tested in 2007 under inactive conditions and with uranium. The patent covers organic confinement in a pulsed column (PC). Pulsed columns have long been used in nuclear environments, as they allow high capacity, sub-critical design (annular geometry) and easy high-activity maintenance. The precipitation reaction between the oxalate complexing agent and a surrogate nitrate - cerium(III) or neodymium(III) alone, or coprecipitated uranium(IV) and cerium(III) - occurs within an emulsion created by these two phases flowing with a counter-current, chemically inert organic phase (for example tetrapropylene hydrogen, TPH), the emulsion being produced by the stirring action of the column pulsator. The precipitate is confined and thus does not form deposits on the vessel walls (which are also water-repellent); it flows downward by gravity and exits the column continuously into a settling tank. The results obtained for precipitation of cerium or

  5. Interfacial shear stress and hold-up in an air-water annular two-phase flow

    International Nuclear Information System (INIS)

    Fukano, T.; Ousaka, A.; Kawakami, Y.; Tominaga, A.

    1991-01-01

    This paper reports an experimental investigation of the hold-up, frictional pressure drop and interfacial shear stress of air-water two-phase annular flow in horizontal and vertical upward and downward flows, carried out to clarify the effects of tube diameter and flow direction. The tube diameters examined were 10 mm, 16 mm and 26 mm. Both the hold-up and the pressure drop changed considerably with time; in particular, the amplitude of the hold-up variation was quite large compared with its averaged value in the case of disturbance wave flow. For the time-averaged hold-up and interfacial friction factor, new correlations were obtained by which they can be estimated within ±20% and ±30%, respectively, independent of the flow direction and the tube diameter

  6. Simulation and control synthesis for a pulse column separation system for plutonium--uranium recovery

    International Nuclear Information System (INIS)

    McCutcheon, E.B.

    1975-05-01

    Control of a plutonium-uranium partitioning column was studied using a mathematical model developed to simulate the dynamic response and to test postulated separation mechanisms. The column is part of a plutonium recycle flowsheet developed for the recovery of plutonium and uranium from metallurgical scrap. In the first step of the process, decontamination from impurities is achieved by coextracting plutonium and uranium in their higher oxidation states. In the second step, reduction of the plutonium to a lower oxidation state allows partitioning of the plutonium and uranium. The use of hydroxylamine for the plutonium reduction in this partitioning column is a unique feature of the process. The extraction operations are carried out in pulse columns. (U.S.)

  7. Description of design and operating procedures of small scale pulsed columns for experimental study on extraction process under abnormal conditions

    International Nuclear Information System (INIS)

    Wakamatsu, Sachio; Sato, Makoto; Kubo, Nobuo; Sakurai, Satoshi; Ami, Norio

    1990-09-01

    To study transient phenomena in a pulsed-column co-decontamination process under abnormal conditions, a pair of small scale pulsed columns (effective extraction section I.D. 25 mm, height 2260 mm) for extraction and scrub were installed in the laboratory. An evaporator for aqueous uranium solution was also provided to reuse the concentrated solution as feed. This report describes several items that required careful treatment in the design, specification and operating procedures of the apparatus. Also described are the procedures for preparation of the feed solutions and treatment of the solutions after the experiments: back-extraction of uranium, diluent washing, alkaline washing and concentration of the uranium solution. (author)

  8. Summary report of phase I residual holdup measurements for a mixed oxide fuel fabrication facility

    International Nuclear Information System (INIS)

    Woodsum, H.C.

    1978-03-01

    Metal surface-powder adherence tests showed that the average mean values for direct impingement were 60 to 80 g/ft2, whereas the average mean values of the colloidal samples were 0.2 to 2.5 g/ft2. Thus, it is advantageous to design powder processing equipment to reduce direct impingement wherever possible. Holdup of powder appears to be relatively independent of the surface material or finish, and it is reduced significantly by low-frequency vibration of the surface. Under colloidal conditions, ThO2 produces more residual material than UO2 and is preferentially deposited from a UO2-ThO2 blend. Pure ThO2 and high-enrichment blends of ThO2 in UO2 are expected to produce a significant, persistent electrostatic charge, thus increasing residual holdup. Residual holdup in the clean scrap recovery system (CSRS) could be reduced by 25%. Comparison of CSRS holdup and powder adherence-metal surface data indicated that the areal density of residual material (40 g/ft2) was considerably higher than for colloidal suspension. The steady-state residual holdup factor for sintered-metal filters was 13 g/ft2 of filter surface area under optimum conditions. During the pellet grinding tests, residual material built up in the system at a rate of about 100 g/h to an estimated limit of 1.4 kg, primarily within the particle collector shroud. During dry grinding, 97% of this residue was contained within the shroud; during wet grinding only 50% was contained in the shroud, owing to inertial effects of the rotating wheel and water coolant

  9. ISOL yield predictions from holdup-time measurements

    International Nuclear Information System (INIS)

    Spejewski, Eugene H.; Carter, H Kennon; Mervin, Brenden T.; Prettyman, Emily S.; Kronenberg, Andreas; Stracener, Daniel W

    2008-01-01

    A formalism based on a simple model is derived to predict ISOL yields for all isotopes of a given element based on a holdup-time measurement of a single isotope of that element. Model predictions, based on parameters obtained from holdup-time measurements, are compared to independently-measured experimental values
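    One simple holdup-time model consistent with this description (an assumption here, not necessarily the authors' formalism) treats release from the target as a single exponential with mean holdup time τ, so the fraction of nuclei surviving radioactive decay during release is 1/(1 + λτ). A single measured τ for one isotope then extrapolates yields to other isotopes of the same element:

    ```python
    import math

    def released_fraction(half_life_s, tau_s):
        """Fraction of produced nuclei surviving decay during release,
        assuming a single-exponential release with mean holdup time tau."""
        lam = math.log(2) / half_life_s          # decay constant, 1/s
        return 1.0 / (1.0 + lam * tau_s)

    def predicted_yield(in_target_rate, half_life_s, tau_s):
        """Extrapolate an ISOL yield for another isotope of the same element,
        reusing the tau measured once for that element."""
        return in_target_rate * released_fraction(half_life_s, tau_s)

    # Hypothetical numbers: tau = 5 s from one holdup-time measurement,
    # applied to a short-lived isotope produced at 1e6 atoms/s in-target:
    y = predicted_yield(in_target_rate=1e6, half_life_s=2.0, tau_s=5.0)
    ```

    For long-lived isotopes the released fraction approaches unity, so holdup losses matter mainly for half-lives comparable to or shorter than τ.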

  10. Capillary holdup between vertical spheres

    Directory of Open Access Journals (Sweden)

    S. Zeinali Heris

    2009-12-01

    The maximum volume of liquid bridge left between two vertically mounted spherical particles has been theoretically determined and experimentally measured. As the gravitational effect has not been neglected in the theoretical model, the liquid interface profile is nonsymmetrical around the X-axis. Symmetry in the interface profile occurs only when either the particle size ratio or the gravitational force becomes zero. In this paper, equations are derived as a function of the spheres' sizes, gap width, liquid density, surface tension and body force (gravity/centrifugal) to estimate the maximum amount of liquid that can be held between the two solid spheres. The results based on these equations are then compared with several experimental results.

  11. 235U Holdup Measurement Program in support of facility shutdown

    International Nuclear Information System (INIS)

    Thomason, R.S.; Griffin, J.C.; Lien, O.G.; McElroy, R.D.

    1991-01-01

    In 1989, the Department of Energy directed shutdown of an enriched uranium processing facility at Savannah River Site. As part of the shutdown requirements, deinventory and cleanout of process equipment and nondestructive measurement of the remaining 235 U holdup were required. The holdup measurements had safeguards, accountability, and nuclear criticality safety significance; therefore, a technically defensible and well-documented holdup measurement program was needed. Appropriate standards were fabricated, measurement techniques were selected, and an aggressive schedule was followed. Early in the program, offsite experts reviewed the measurement program, and their recommendations were adopted. Contact and far-field methods were used for most measurements, but some process equipment required special attention. All holdup measurements were documented, and each report was subjected to internal peer review. Some measured values were checked against values obtained by other methods; agreement was generally good

  12. Comparison of the performance of full scale pulsed columns vs. mixer-settlers for uranium solvent extraction

    International Nuclear Information System (INIS)

    Movsowitz, R.L.; Kleinberger, R.; Buchalter, E.M.; Grinbaum, B.

    2000-01-01

    A rare opportunity arose to compare the performance of Bateman Pulsed Columns (BPC) vs. mixer-settlers at an industrial site, over a long period, when the Uranium Solvent Extraction Plant of WMC at Olympic Dam, South Australia was upgraded. The original plant was operated for years with two trains of 2-stage mixer-settler batteries for the extraction of uranium. When the company decided to increase the yield of the plant, the existing two trains of mixer-settlers for uranium extraction were arranged in series, giving one 4-stage battery. In parallel, two Bateman Pulsed Columns, of the disc-and-doughnut type, were installed to compare the performance of both types of equipment over an extended period. The plant has been operating in parallel for three years, and the results show that the performance of the columns is excellent: the extraction yield is similar to that of the four mixer-settlers in series (about 98%), the entrainment of solvent is lower, there are fewer mechanical failures, fewer problems with crud and smaller solvent losses, and the operation is simpler. The results convinced WMC to install an additional 10 BPCs for the expansion of their uranium plant. These columns were successfully commissioned in early 1999. This paper includes a quantitative comparison of both types of equipment. (author)

  13. Contribution to the study of wettability in a pulsed column dedicated to the production of a precipitate

    International Nuclear Information System (INIS)

    Picard, R.

    2011-01-01

    The process dedicated to the oxalic precipitation of plutonium is very sensitive to the highly sticky behavior of the produced precipitates. The laboratory of 'Genie Chimique et Instrumentation' based in Marcoule, France, therefore put forward the idea of carrying out the process in a pulsed column. In this way the precipitate is confined inside the droplets of the emulsion, far from the surfaces of the apparatus. Nevertheless, if those surfaces are made of stainless steel, fouling of the column is inevitably observed. The thesis introduces the concepts and tools needed for a fine understanding of the fouling issue. Although the work carried out covers the whole issue, the thesis mainly focuses on drop bouncing. The results provide experimental data in a little-studied configuration and identify the key parameters driving the bounce. The practical application of these results shows that using an un-optimized stainless steel pulsed column for the precipitation of radionuclides does not prevent fouling. The process could still be carried out using another technology patented during the PhD. This last point needs further investigation. In particular, the CEA has to work on the scale-up steps to design an apparatus able to process industrial flow rates. This might also be an interesting issue in process engineering. (author) [fr

  14. Validation of the design of small diameter pulsed columns for the process line DRA. Tests reliability compared with the industrial scale

    International Nuclear Information System (INIS)

    Leybros, J.

    2000-01-01

    As part of the Spin program on the management of nuclear wastes, studies have been undertaken to develop partitioning processes such as the Diamex process. The CCBP/DRA process line in the Atalante facility is one of the main installations devoted to these studies. On this line industrial apparatus is used, but some equipment, such as the pulsed columns, needs to be adapted to the specificities of the installation: limited amounts of nuclear matter, gaseous waste minimization, safety, limited amounts of new extractants, etc. This article presents a comparison of two air-pulsed columns, one with a standard nominal diameter of 25 mm (DN25), the other with a reduced nominal diameter of 15 mm (DN15). The comparison is based on three main criteria: pulsation capability, superficial throughput and mass-transfer efficiency. The overall comparison shows that a DN15 pulsed column can be considered a representative research and development tool. In particular, the study demonstrates the possibility of scaling up the results

  15. Development of holdup monitor system (HMOS) during facility maintenance

    International Nuclear Information System (INIS)

    Nakamura, Hironobu; Hosoma, Takashi; Tanaka, Izumi

    1999-01-01

    The Holdup MOnitor System (HMOS) was developed to verify that holdup remained constant during facility maintenance in the Plutonium Conversion Development Facility (PCDF). The glove box assay system (GBAS; big slab), used by inspectors, measures the holdup periodically (i.e. IIV) using coincidence counting. The GBAS could not be used for inspection during the maintenance period, because many glove boxes (GBs) in the process area were occupied by large vinyl greenhouses for maintenance. Since the holdup outside the GBs under maintenance should remain constant during the maintenance period, HMOSs were installed on three GBs. The system was used for verification from June 1998 to July 1999. The HMOS detectors are located at the top and bottom of a GB and continuously count the total neutron variation in the GB. Detector efficiencies are 1.2% (top) and 0.12% (bottom). A measurement variation of up to 1.5% (3σ) is observed. The HMOS has a high sensitivity of 8 to 90 g Pu (3σ; in the case of 1 kg Pu holdup, the sensitivity depends on position in the GB). Movement of equipment or nuclear material from or within the GB can be detected effectively. Although the HMOS measurements vary with humidity in the GB, the hygroscopic effect of the denitrated MOX powder, material and equipment movement, and mainly 241 Pu nuclear decay, the system can verify qualitatively that the holdup is constant. As a result, in PCDF, safeguards inventory verification during a maintenance period of more than one year was successfully performed using the holdup monitor system. (author)

  16. Low-enriched uranium holdup measurements in Kazakhstan

    International Nuclear Information System (INIS)

    Barham, M.A.; Ceo, R.; Smith, S.E.

    1998-01-01

    Quantification of the residual nuclear material remaining in process equipment has long been a challenge to those who work with nuclear material accounting systems. Fortunately, nuclear material has spontaneous radiation emissions that can be measured. If gamma-ray measurements can be made, it is easy to determine what isotope a deposit contains. Unfortunately, it can be quite difficult to relate this measured signal to an estimate of the mass of the nuclear deposit. Typically, the measurement expert must work with incomplete or inadequate information to determine a quantitative result. Simplified analysis models, the distribution of the nuclear material, any intervening attenuation, background(s), and the source-to-detector distance(s) can have significant impacts on the quantitative result. This presentation discusses the application of a generalized-geometry holdup model to the low-enriched uranium fuel pellet fabrication plant in Ust-Kamenogorsk, Kazakhstan. Preliminary results will be presented. Software tools have been developed to assist the facility operators in performing and documenting the measurements. Operator feedback has been used to improve the user interfaces
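A generalized-geometry holdup model of the kind referred to here classifies each deposit as a point, line, or area source viewed by a collimated detector, with the net count rate scaling differently with detector-to-deposit distance r for each geometry. A minimal sketch of that scaling; the calibration constants and survey values below are hypothetical, and a real survey also corrects for intervening attenuation, self-absorption, and background:

```python
# Generalized-geometry holdup sketch (hypothetical calibration constants k,
# which would come from measuring known standards at a reference distance).
def point_mass_g(net_cps, r_m, k_point):
    """Whole deposit inside the field of view: mass scales as r**2."""
    return net_cps * r_m ** 2 / k_point

def line_density_g_per_m(net_cps, r_m, k_line):
    """Deposit spans the view in one dimension: linear density scales as r."""
    return net_cps * r_m / k_line

def area_density_g_per_m2(net_cps, k_area):
    """Deposit fills the whole view: areal density is independent of r."""
    return net_cps / k_area

# Hypothetical survey point: 250 net cps at 0.5 m with k_point = 125.
print(point_mass_g(250.0, 0.5, 125.0), "g")
```

The geometry choice is exactly the kind of incomplete information the abstract mentions: misclassifying a line deposit as a point changes the distance dependence and hence the mass estimate.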

  17. Study and modeling of the dispersed phase behavior in a pulsed column: application to an oxalic precipitation process

    International Nuclear Information System (INIS)

    Amokrane, Abdenour

    2014-01-01

    The thesis focuses on the study and modeling of a pulsed column used in liquid-liquid extraction operations in the nuclear industry, which is also being considered for continuous precipitation operations in emulsion. This manuscript addresses the modeling of the dispersed-phase behavior in the column. First, we modeled the continuous-phase mean velocity and turbulence fields, which are responsible for the transport, breakage, and coalescence of the drops. The model developed, validated against PIV measurements, predicts the turbulence satisfactorily. Modeling of the residence time distribution (RTD) of the drops by a Lagrangian approach is then carried out; this model is validated against measurements taken by a shadowgraph technique, and the simulation results are in good agreement with the experimental ones. To model the droplet size distributions (DSD) in the column, we used population balance equations (PBE) coupled with the computational fluid dynamics (CFD) equations. A continuously stirred tank reactor (CSTR) fitted with an optical sensor was first used to acquire DSDs representative of our liquid-liquid system. Through a 0D model of the flow in the CSTR and solution of the inverse problem, we determined the breakage and coalescence kernels relevant to our system for use in the PBE. These kernels were then used to predict the DSD in the pulsed column with a coupled CFD-PBE model based on the QMOM method. Finally, the coupled CFD-PBE model is validated: the predicted DSDs agree with the experimental data both qualitatively and quantitatively. The validated model is then used to study the sensitivity of the emulsion to the column operating conditions. (author) [fr]

  18. HEU Holdup Measurements on 321-M A-Lathe

    International Nuclear Information System (INIS)

    Dewberry, R.A.

    2002-01-01

    The Analytical Development Section of SRTC was requested by the Facilities Disposition Division (FDD) of the Savannah River Site to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project. The 321-M facility was used to fabricate enriched uranium fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the production reactors. The results of the holdup assays are essential for determining compliance with the solid waste Waste Acceptance Criteria, for Material Control and Accountability, and for meeting criticality safety controls. Three measurement systems were used to determine highly enriched uranium (HEU) holdup. This report covers holdup measurements on the A-lathe, which was used to machine uranium-aluminum alloy (U-Al). Our results indicated that the lathe contained more uranium than the limits stated in the Waste Acceptance Criteria (WAC) for the solid waste E-Area Vaults. The lathe was therefore decontaminated three times and assayed four times to bring the uranium content to an acceptable level. This report discusses the methodology, the Non-Destructive Assay (NDA) measurements, and the results for the U-235 holdup on the lathe

  19. Holdup-related issues in safeguarding of nuclear materials

    International Nuclear Information System (INIS)

    Pillay, K.K.S.

    1988-03-01

    Residual inventories of special nuclear materials (SNM) remaining in processing facilities (holdup) are recognized as an insidious problem for both safety and safeguards. This paper identifies some of the issues of concern to the safeguards community at large that are related to holdup of SNM in large-scale process equipment. These issues range from the basic technologies of SNM production to regulatory requirements that are changing to meet the needs of safeguarding nuclear materials. Although there are no magic formulas to resolve these issues, several initiatives could be taken in the areas of facility design, plant operation, personnel training, SNM monitoring, and regulatory guidelines to minimize the problems of holdup and thereby improve both safety and safeguards at nuclear material processing plants. 8 refs

  20. Determination of the radioactive material and plutonium holdup in ducts and piping in the 325 Building

    International Nuclear Information System (INIS)

    Haggard, D.L.; Tanner, J.E.; Tomeraasen, P.L.

    1996-08-01

    This report describes the measurements performed to determine the radionuclide content and the mass of Pu in exposed ducts, filters, and piping in the 325 Building at the Hanford Site. This information is needed to characterize facility radiation levels, to verify compliance with criticality safety specifications, and to allow more accurate nuclear material control using nondestructive assay. Gamma assay was used to determine the gamma-emitting isotopes in ducts, filters, and piping, and passive neutron counting was used to estimate the Pu content. A high-purity Ge detector and a neutron slab detector containing five 3He proportional counters were used. Almost all the gamma activity is from 137Cs and 60Co. Estimated Pu mass gram equivalents are 31 g in the basement ductwork and filters, 12 g in the radioactive liquid waste system (RLWS) line, 2 g in the laboratory vacuum system, and 3 g in the retention process sewer. Total Pu mass holdup for the basement areas ranges from 27 to 48 g. Estimated Pu mass gram equivalents for all laboratories range from 385 to 581 g; individual laboratory estimates are tabulated. Total estimated Pu gram-equivalent holdup and material in process for the facility is 410 g. In summary, the results indicate that no significant Pu levels, from a criticality safety perspective, reside in the ductwork, laboratory vacuum system lines, RLWS pipes, or any one laboratory in the 325 Building

  1. Axial and Radial Gas Holdup in Bubble Column Reactor

    International Nuclear Information System (INIS)

    Wagh, Sameer M.; Ansari, Mohashin E Alan; Kene, Pragati T.

    2014-01-01

    Bubble column reactors are considered the reactor of choice for numerous applications, including oxidation, hydrogenation, waste water treatment, and Fischer-Tropsch (FT) synthesis. They are widely used in a variety of industrial applications for carrying out gas-liquid and gas-liquid-solid reactions. In this paper, a computational fluid dynamics (CFD) model is used to predict the gas holdup, and its distributions along the radial and axial directions are presented. Gas holdup increases linearly with increasing gas velocity. Gas bubbles tend to concentrate toward the center of the column and follow a wavy path

  2. Bi-Modal Model for Neutron Emissions from PuO2 and MOX Holdup

    International Nuclear Information System (INIS)

    Menlove, Howard; Lafleur, Adrienne

    2015-01-01

    To characterize the neutron (α,n) yields as a function of holdup deposit thickness, we have used MCNPX calculations to estimate the absorption of alpha particles in PuO2 holdup deposits. The powder thickness was varied from 0.1 μm to 5000 μm and the alpha-particle escape probability was calculated. As would be expected, as the thickness approaches zero the escape probability approaches 1.0, and as the thickness becomes much greater than the alpha-particle range (∼50 μm) the escape probability becomes small. Typically, the neutron holdup calibration measurements are performed using sealed containers of thick MOX that has all 3 sources of neutrons [SF, (α,n), and M] and no significant impurities. Thus, the calibration counting rates need to include corrections for the M and (α,n) yields, which differ between the holdup and the calibration samples. If totals neutron counting is used for the holdup measurements, the variability of the (α,n) term needs to be considered
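The thickness dependence described here can be reproduced qualitatively with a constant-range slab model: uniform, isotropic alpha emission in a slab of thickness t, straight-line transport with a fixed range R, and escape through either face. This is a simplified stand-in for the MCNPX calculation, not a reproduction of it; the closed form is P = 1 - t/(2R) for t ≤ R and P = R/(2t) for t ≥ R, matching the stated limits (P → 1 as t → 0, P small for t much greater than 50 μm):

```python
def alpha_escape_probability(t_um, range_um=50.0):
    """Escape fraction for alphas emitted uniformly and isotropically in a
    slab deposit of thickness t (constant straight-line range R, escape
    through both faces). Simplified geometric model, not MCNPX."""
    t, R = t_um, range_um
    if t <= R:
        return 1.0 - t / (2.0 * R)
    return R / (2.0 * t)

for t in (0.1, 25.0, 50.0, 5000.0):  # same thickness range as the study
    print(t, round(alpha_escape_probability(t), 4))
```

Averaging the escape solid angle over emission depth gives a result independent of depth for t ≤ R, which is why the thin-deposit branch is exactly linear in t.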

  3. Bi-Modal Model for Neutron Emissions from PuO2 and MOX Holdup

    Energy Technology Data Exchange (ETDEWEB)

    Menlove, Howard; Lafleur, Adrienne [Los Alamos National Laboratory, Safeguard Science and Technology Group, NEN-1, MS E540, Los Alamos, NM, 87545 (United States)

    2015-07-01

    To characterize the neutron (α,n) yields as a function of holdup deposit thickness, we have used MCNPX calculations to estimate the absorption of alpha particles in PuO2 holdup deposits. The powder thickness was varied from 0.1 μm to 5000 μm and the alpha-particle escape probability was calculated. As would be expected, as the thickness approaches zero the escape probability approaches 1.0, and as the thickness becomes much greater than the alpha-particle range (∼50 μm) the escape probability becomes small. Typically, the neutron holdup calibration measurements are performed using sealed containers of thick MOX that has all 3 sources of neutrons [SF, (α,n), and M] and no significant impurities. Thus, the calibration counting rates need to include corrections for the M and (α,n) yields, which differ between the holdup and the calibration samples. If totals neutron counting is used for the holdup measurements, the variability of the (α,n) term needs to be considered.

  4. The myth of the early aviation patent hold-up

    DEFF Research Database (Denmark)

    Katznelson, Ron D; Howells, John

    2015-01-01

    The prevailing historical accounts of the formation of the U.S. aircraft “patent pool” in 1917 assume the U.S. Government necessarily intervened to alleviate a patent hold-up among private aircraft manufacturers. We show these accounts to be inconsistent with the historical facts. We show that de...

  5. Methods for nondestructive assay holdup measurements in shutdown uranium enrichment facilities

    International Nuclear Information System (INIS)

    Hagenauer, R.C.; Mayer, R.L. II.

    1991-09-01

    Measurement surveys of uranium holdup using nondestructive assay (NDA) techniques are being conducted for shutdown gaseous diffusion facilities at the Oak Ridge K-25 Site (formerly the Oak Ridge Gaseous Diffusion Plant). When in operation, these facilities processed UF6 with enrichments ranging from 0.2 to 93 wt % 235U. Following final shutdown of all process facilities, NDA surveys were initiated to provide process holdup data for the planning and implementation of decontamination and decommissioning activities. A three-step process is used to locate and quantify deposits: (1) high-resolution gamma-ray measurements are performed to generally define the relative abundances of radioisotopes present, (2) sizable deposits are identified using gamma-ray scanning methods, and (3) the deposits are quantified using neutron measurement methods. Following initial quantitative measurements, deposit sizes are calculated; high-resolution gamma-ray measurements are then performed on the items containing large deposits. The quantitative estimates for the large deposits are refined on the basis of these measurements. Facility management is using the results of the survey to support a variety of activities including isolation and removal of large deposits; performing health, safety, and environmental analyses; and improving facility nuclear material control and accountability records. 3 refs., 1 tab

  6. Implementation of dynamic cross-talk correction (DCTC) for MOX holdup assay measurements among multiple gloveboxes

    International Nuclear Information System (INIS)

    Nakamichi, Hideo; Nakamura, Hironobu; Mukai, Yasunobu; Kurita, Tsutomu; Beddingfield, David H.

    2012-01-01

    Plutonium holdup in gloveboxes (GBs) is measured by a passive-neutron-based NDA system (HBAS) for material control and accountancy (MC and A) at the Plutonium Conversion Development Facility (PCDF). When GBs are installed close to one another, the cross-talk, i.e., double counting of neutrons among GBs, should be corrected properly. Although predetermined constants had been used for the cross-talk correction, a new correction methodology that follows inventory changes in the GBs was required to improve MC and A. To address the issue of variable cross-talk contributions to holdup assay values, we applied a dynamic cross-talk correction (DCTC) method, based on a distributed source-term analysis approach, to obtain the actual doubles rates derived from the cross-talk between multiple GBs. With the introduction of DCTC for the HBAS measurement, we could remove source biases from the assay result by reliably estimating the doubles counting derived from the cross-talk, and the HBAS measurement uncertainty was reduced to half that of the conventional system; we are going to confirm this result. Since the DCTC methodology can determine the cross-correlation among multiple inventories in small areas, it is expected that it can also contribute to safeguards by design. (author)
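The essence of a cross-talk correction of this kind can be sketched as a small linear system: each measured doubles rate is the glovebox's own rate plus coupling-weighted contributions from its neighbors. The coupling matrix and rates below are invented for illustration; DCTC obtains the couplings from distributed source-term analysis rather than fixed constants:

```python
# Hypothetical 3-glovebox cross-talk sketch. C[i][j] is the fraction of
# GB j's true doubles rate also registered by GB i's detector (values
# invented for illustration). Measured = (I + C) @ true, so we recover
# the true rates by fixed-point iteration, which converges because the
# coupling fractions are small.
C = [[0.00, 0.04, 0.00],
     [0.03, 0.00, 0.05],
     [0.00, 0.06, 0.00]]
measured = [12.4, 20.1, 9.8]   # doubles rates (1/s), assumed

x = measured[:]
for _ in range(100):
    x = [measured[i] - sum(C[i][j] * x[j] for j in range(3))
         for i in range(3)]

print([round(v, 3) for v in x])  # corrected (true) doubles rates
```

Each corrected rate comes out below its measured value, as expected when neighbors only add counts.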

  7. Axial holdup in pulsed perforated-plate column of pulser feeder type, (2)

    International Nuclear Information System (INIS)

    Ikeda, Hidematsu; Suzuki, Atsuyuki; Kiyose, Ryohei.

    1987-01-01

    In mathematical models of a pulsed perforated-plate column, the dispersed-phase holdup has traditionally been considered uniform along the length of the column, but more recently it has been treated as nonuniform. In the previous paper, axial holdup data were obtained in the dispersed-aqueous and dispersed-organic modes. The experimental results showed that the axial holdup is nonuniform along the column, and that both the plate type and the operation mode affect the axial holdup distribution. The present work attempts to formulate the axial holdup by means of a heuristic self-organization method that provides a nonlinear prediction model of a complex system, since the holdup data did not directly show a trend significant enough to formulate the axial holdup. The Group Method of Data Handling (GMDH) is used for this purpose. The GMDH can be used for the selection and synthesis of the input variables related to the axial holdup of the pulsed perforated-plate column. The axial holdup data have been successfully correlated, and the identification models could be useful in discussing mathematical models. (author)
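The flavor of a single GMDH layer can be sketched as follows: fit a quadratic polynomial to every pair of candidate input variables on a training split and keep the pair that generalizes best to a validation split (deeper GMDH networks then feed the surviving outputs into the next layer). The data below are synthetic stand-ins for axial-holdup observations, not the paper's data:

```python
import itertools
import random

random.seed(0)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (M[c][n] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

def quad_features(a, b):
    """GMDH partial description: full quadratic in two inputs."""
    return [1.0, a, b, a * b, a * a, b * b]

def lstsq(rows, ys):
    """Least squares via the normal equations."""
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    return solve(A, b)

# Synthetic "operating variables" and response (assumed, for illustration).
X = [[random.uniform(0.5, 2.0) for _ in range(3)] for _ in range(60)]
y = [0.3 * x[0] * x[1] + 0.1 * x[2] ** 2 + random.gauss(0, 0.01) for x in X]
train, valid = range(0, 40), range(40, 60)

best = None
for i, j in itertools.combinations(range(3), 2):
    rows = [quad_features(X[k][i], X[k][j]) for k in range(60)]
    coef = lstsq([rows[k] for k in train], [y[k] for k in train])
    err = sum((sum(c * f for c, f in zip(coef, rows[k])) - y[k]) ** 2
              for k in valid) / len(valid)
    if best is None or err < best[0]:
        best = (err, (i, j))

print("best input pair:", best[1], "validation MSE:", best[0])
```

The validation split is what gives GMDH its self-organizing character: candidate models are kept or discarded by out-of-sample error rather than fit quality.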

  8. A Next-Generation Automated Holdup Measurement System (HMS-5)

    International Nuclear Information System (INIS)

    Gariazzo, Claudio Andres; Smith, Steven E.; Solodov, Alexander A

    2007-01-01

    Holdup Measurement System 4 software (HMS4) has been in use since its release in 2004 to systematically measure and verify the amounts of uranium holdup in process facilities under safeguards. It is a system for measuring uranium and plutonium and archiving holdup data (via barcoded locations with associated information), which is essential for any internationally safeguarded facility monitoring residual special nuclear material (SNM). HMS4 has also been tested by sites in Russia, the United States, South Africa, and China for more effective application. Comments and lessons learned have been received over time, and an updated version of the software would enable the international partners to use a wider variety of the commercial equipment existing at these facilities. In June 2005, Oak Ridge National Laboratory (ORNL) and Los Alamos National Laboratory conducted a holdup measurement training course on HMS4 for subject matter experts from the Ulba Metallurgical Facility at Ust-Kamenogorsk, Kazakhstan, which included an additional external software package for improved measurement of low-enriched uranium using the more readily detected higher-energy gamma rays. Because this package is not currently integrated into HMS4, it would be greatly beneficial to include it in the next-generation HMS software package (HMS-5). This upgrade would give the International Atomic Energy Agency (IAEA) a more comprehensive software package tested at several safeguarded locations. When released, HMS4 supported only AMETEK/ORTEC equipment, although many facilities currently use Canberra Industries technology (detectors, multi-channel analyzers, other hardware, and software packages). For HMS-5 to support all available hardware systems and to benefit the majority of international partners and the IAEA, Canberra technology must be integrated because of the widespread use of its hardware.
    Furthermore, newly developed

  9. Holdup of O/W emulsion in a packed column for liquid membrane separation of hydrocarbon; Tankasuiso no ekimaku bunri ni okeru jutento nai no emarushon horudo appu

    Energy Technology Data Exchange (ETDEWEB)

    Egashira, R.; Sugimoto, T.; Kawasaki, J. [Tokyo Inst. of Technology, Tokyo (Japan). Faculty of Engineering

    1993-07-10

    Liquid membrane separation of hydrocarbons is an energy-saving separation method expected to find practical use. If the method uses a packed column, the holdup of O/W emulsion affects the effective contact area and the residence time of the emulsion. This paper therefore attempts to correlate the dynamic emulsion holdup in a packed column, for liquid membrane separation of hydrocarbons, with the physical properties of the emulsion and the external oil phase and with the operating variables. The experiments used a mixture of toluene + n-heptane + n-decane as the oil phase of the O/W emulsion and aqueous saponin solution as the aqueous phase (liquid membrane phase). The packed column, with an inner diameter of 37 mm, used stainless steel McMahon packing. The dynamic emulsion holdup was correlated with the Reynolds number and the Galilei number, regardless of whether permeation through the liquid membrane occurred. The correlation makes it possible to estimate simply the emulsion holdup in a packed column when this separation method is used. 5 refs., 3 figs., 3 tabs.
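The two dimensionless groups used in the correlation are straightforward to evaluate. The fluid properties below are illustrative assumptions (a water-like continuous phase and 6 mm packing); the correlation coefficients themselves are not given in the abstract:

```python
def reynolds(rho, u, d, mu):
    """Re = rho*u*d/mu: density (kg/m^3), superficial velocity (m/s),
    characteristic packing size (m), viscosity (Pa*s)."""
    return rho * u * d / mu

def galilei(rho, d, mu, g=9.81):
    """Ga = g*d**3*rho**2/mu**2: ratio of gravitational to viscous forces."""
    return g * d ** 3 * rho ** 2 / mu ** 2

# Assumed illustrative values, not from the paper:
Re = reynolds(998.0, 0.01, 0.006, 1.0e-3)
Ga = galilei(998.0, 0.006, 1.0e-3)
print(f"Re = {Re:.1f}, Ga = {Ga:.3e}")
```

A holdup correlation of the form h = a * Re^b * Ga^c would then be fitted to the measured dynamic holdup data.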

  10. In line digital holography measurement for liquid-liquid flow: application to the characterization of emulsions produced in pulsed column

    International Nuclear Information System (INIS)

    Lamadie, F.

    2013-01-01

    Several processes used in research and industry are based on liquid-liquid extraction, a method designed for the selective separation of products in a mixture. In liquid-liquid extraction, two immiscible liquids are contacted: an aqueous phase and an organic phase, one of which generally contains an extractant molecule capable of transferring the desired elements to the other phase. The transfer occurs at the contact surface between the two phases; after transfer, both phases are separated by settling. In practice, these operations are performed in industrial apparatus. To optimize the operation of these devices, it is important to determine the fundamental characteristics of the emulsion. These include parameters related to the fluid flow velocity as well as parameters related to fluid mixing, such as the interfacial area, the hold-up, and the size distribution of the droplet population. Numerous imaging techniques can be used to measure these parameters. One of them, digital holography, is well known for allowing complete reconstruction of information about a 3D flow in a single shot. This PhD work deals with a direct application of digital in-line holography to droplets rising in a continuous liquid phase. The droplet size imposes a regime of intermediate-field diffraction hardly explored to date. The acquired diffraction patterns show that the usual dark-disk model is not valid and that good agreement is obtained with a mixed model coupling a thin lens with an opaque disk. Hologram focusing is nevertheless performed with a dedicated automated method; a literature review was conducted to identify the best-suited autofocus function for our application. In a second step, in order to measure high retention (hold-up) rates, an inverse problem approach is applied to all the outlier and missing droplets. This hologram restitution treatment has been applied to experimental results with a comparison to independent measurements. The main results obtained with calibrated droplets are

  11. Tritium inventory differences: I. Sampling and U-getter pump holdup

    International Nuclear Information System (INIS)

    Ellefson, R.E.; Gill, J.T.

    1986-01-01

    Inventory differences (ID) in tritium material balance accounts (MBA) can arise from unmeasured transfers out of the process or unmeasured holdup in the system. Small but cumulatively significant quantities of tritium can leave the MBA through normal capillary sampling of the process gas. A predictor model for estimating the quantity of tritium leaving the MBA by sampling has been developed and implemented. The model calculates the gas transferred per sample; using the tritium concentration in the process and the number of samples, the quantity of tritium transferred is predicted. The model is verified by PVT measurement of the process transfer from multiple samplings. Comparison of predicted sample transfers with IDs from several MBAs reveals that sampling typically represents 50% of unmeasured transfers for regularly sampled processes
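The predictor model's basic arithmetic (gas removed per capillary sample, times tritium concentration, times number of samples) can be sketched with the ideal gas law. Every number below is an assumed illustration, not a value from the paper:

```python
R = 8.314  # J/(mol*K), ideal gas constant

def tritium_removed_per_sample_mol(p_pa, v_m3, t_k, t2_fraction):
    """Moles of T2 removed by one capillary sample, ideal-gas assumption:
    n = pV/(RT), scaled by the T2 fraction of the process gas."""
    return (p_pa * v_m3) / (R * t_k) * t2_fraction

# Assumed numbers: 0.5 cm^3 capillary at 50 kPa, 300 K, 90% T2 process gas.
per_sample = tritium_removed_per_sample_mol(5.0e4, 0.5e-6, 300.0, 0.90)
n_samples = 200
total_mol = per_sample * n_samples
print(f"{total_mol:.2e} mol T2 over {n_samples} samples")
```

Summing this predicted removal over an accounting period gives the sampling contribution to the unmeasured-transfer side of the ID.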

  12. A parametric study of powder holdups in a packed bed under ...

    African Journals Online (AJOL)

    More specifically, a parametric study is performed to determine the effects of the gas blast velocity, particle size and powder loading on the powder holdups. Results are presented in terms of the fines accumulation area. This work shows the dependency of the powder holdups on the packed bed flow parameters. Keywords: ...

  13. Holdup Measures on an SRNL Mossbauer Spectroscopy Instrument

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R.; Brown, T.; Salaymeh, S.

    2010-05-05

    Gamma-ray holdup measurements of a Mossbauer spectroscopy instrument are described and modeled. In the qualitative acquisitions obtained in a low-background area of Savannah River National Laboratory, only Am-241 and Np-237 activity were observed. The Am-241 was known to be the instrumental activation source, while the Np-237 is clearly observed as a source of contamination internal to the instrument. The two sources of activity are modeled separately in two acquisition configurations using two separate modeling tools. The results agree well, demonstrating a content of (1980 ± 150) μCi of Am-241 and (110 ± 50) μCi of Np-237.

  14. Inefficient Job Destructions and Training with Hold-up

    DEFF Research Database (Denmark)

    Chéron, Arnaud; Rouland, Benedicte

    2011-01-01

    This paper develops an equilibrium search model with endogenous job destructions in which firms decide at the time of job entry how much to invest in match-specific human capital. We first show that job destruction and training investment decisions are strongly complementary; it is possible that there are no firings at equilibrium. Further, training investments face a hold-up problem that makes the decentralized equilibrium always inefficient. We therefore show that both training subsidies and firing taxes must be implemented to restore efficiency.

  15. Hold-up monitoring system for plutonium process tanks

    International Nuclear Information System (INIS)

    Zhu Rongbao; Jin Huimin; Tan Yajun

    1994-01-01

    The development of a hold-up monitoring system for plutonium process tanks is described, together with a method for calculating the α activities deposited in containers and on the inner walls of piping. The hardware of the monitoring system consists of a portable HPGe detector, a φ50 mm x 60 mm NaI(Tl) detector, γ-ray tungsten collimators, an ORTEC 92X Spectrum Master, and an AST-286 computer. The software of the system includes Maestro for Windows 3 and the PHOUP1 hold-up application software for the user. A Monte Carlo simulation with the MCNP code is performed to calculate the probability that unscattered γ-rays emitted by the source terms deposited in the complicated tanks reach the detection positions. A mean of measurements at different positions is used to minimize the effect of the heterogeneous distribution of the source term. The sensitivity is better than 3.7 x 10⁶ Bq/kg (steel) for a simulated plutonium source on a 3-8 mm thick steel plate in a γ field of 0.8 x 10⁻¹⁰ C/(kg·s) from long-lived fission products
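The unscattered-photon probability that such a Monte Carlo calculation provides can be illustrated with a toy model: a point source behind a steel wall, isotropic emission over the forward hemisphere, and exponential attenuation along the slant path. The attenuation coefficient and wall thickness are assumed values, and real tank geometries are far more complex than this sketch:

```python
import math
import random

random.seed(1)

MU_STEEL = 0.5   # 1/cm, assumed linear attenuation coefficient for steel
WALL_CM = 0.8    # assumed wall thickness

def unscattered_fraction(n=200_000):
    """Toy MC estimate of the fraction of forward-hemisphere photons that
    cross the wall without interacting (survival sampled as exp(-mu*path))."""
    hits = 0
    for _ in range(n):
        mu_cos = 1.0 - random.random()     # direction cosine in (0, 1]
        path = WALL_CM / mu_cos            # slant path through the wall
        if random.random() < math.exp(-MU_STEEL * path):
            hits += 1
    return hits / n

print(f"unscattered escape fraction ~ {unscattered_fraction():.3f}")
```

The production calculation replaces this single slab with the full tank and collimator geometry, which is why MCNP is used rather than a closed form.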

  16. Investigation of Gas Holdup in a Vibrating Bubble Column

    Science.gov (United States)

    Mohagheghian, Shahrouz; Elbing, Brian

    2015-11-01

    Synthetic fuels are part of the solution to the world's energy crisis and climate change. Liquefaction of coal via the Fischer-Tropsch process in a bubble column reactor (BCR) is a key step in the production of synthetic fuel. It has been known since the 1960s that vibration improves mass transfer in bubble columns. The current study experimentally investigates the effects of vibration frequency and amplitude on gas holdup and bubble size distribution within a bubble column. Air (dispersed phase) was injected into water (continuous phase) through a needle-shaped injector near the bottom of the column, which was open to atmospheric pressure. The air volumetric flow rate was measured with a variable-area flow meter. Vibrations were generated with a custom-made shaker table, which oscillated the entire column with independently specified amplitude and frequency (0-30 Hz). Geometric dependencies can be investigated with four cast acrylic columns with aspect ratios ranging from 4.36 to 24 and injector needle internal diameters between 0.32 and 1.59 mm. The gas holdup within the column was measured with a flow visualization system, and a PIV system was used to measure phase velocities. Preliminary results for the non-vibrating and vibrating cases will be presented.

  17. Experimental program for development and evaluation of nondestructive assay techniques for plutonium holdup

    International Nuclear Information System (INIS)

    Brumbach, S.B.

    1977-05-01

    An outline is presented for an experimental program to develop and evaluate nondestructive assay techniques applicable to holdup measurement in plutonium-containing fuel fabrication facilities. The current state-of-the-art in holdup measurements is reviewed. Various aspects of the fuel fabrication process and the fabrication facility are considered for their potential impact on holdup measurements. The measurement techniques considered are those using gamma-ray counting, neutron counting, and temperature measurement. The advantages and disadvantages of each technique are discussed. Potential difficulties in applying the techniques to holdup measurement are identified. Experiments are proposed to determine the effects of such problems as variation in sample thickness, in sample distribution, and in background radiation. These experiments are also directed toward identification of techniques most appropriate to various applications. Also proposed are experiments to quantify the uncertainties expected for each measurement

  18. U-235 Holdup Measurements in the 321-M Lathe HEPA Banks

    International Nuclear Information System (INIS)

    Salaymeh, S.R.

    2002-01-01

    The Analytical Development Section of Savannah River Technology Center (SRTC) was requested by the Facilities Decommissioning Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The results of the holdup assays are essential for determining compliance with the Waste Acceptance Criteria, for Material Control and Accountability, and for meeting criticality safety controls. This report covers holdup measurements of uranium residue in six high-efficiency particulate air (HEPA) filter banks of the A-lathe and B-lathe exhaust systems of the 321-M facility. This report discusses the non-destructive assay measurements, assumptions, calculations, and results of the uranium holdup in these six items

  19. Study on the dynamic holdup distribution of the pulsed extraction column

    International Nuclear Information System (INIS)

    Wang, S.; Chen, J.; Wu, Q.

    2013-01-01

    In this study, a CSTR-cascade dynamic hydraulic model was developed to investigate the dynamic holdup distribution of a pulsed extraction column. It is assumed that the dynamic behavior of the dispersed-phase holdup in a pulsed extraction column is equivalent to the operation of multiple CSTRs in cascade, under the following assumptions: the holdup varies from stage to stage but is uniform within each stage; changes in the hydraulic parameters act first on the dispersed-phase inlet, and steady state is reached gradually through stage-by-stage mixing. The model was tested and verified using time-domain response curves of the average holdup. Nearly 150 experiments were carried out with different capillary columns, various feed liquids, and diverse continuous phases, under different operating conditions. The regression curves produced by the model show good consistency with the experimental results. After linking the model parameters to the operating conditions, the study further found that, for a given pulsed extraction column, the parameters correlate linearly with the pulse conditions only and are independent of the flow rate. The accuracy of the model, measured by the average holdup, corresponds to an absolute error of ±0.01. By providing simple and reliable predictions of the dynamic holdup distribution from relatively scarce hydraulic experimental data, the model can support boundary studies of hydraulics and mass transfer. (authors)
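The tanks-in-series picture described here has a closed-form step response in the linear case, which shows how an inlet change propagates stage by stage before the average holdup settles. The stage count, time constant, and holdup levels below are assumed for illustration; the paper fits such parameters to the pulse conditions:

```python
import math

def cascade_step_response(t, n_stages, tau, h0=0.05, h_in=0.10):
    """Holdup of the last stage at time t after an inlet step h0 -> h_in,
    analytic solution for n identical first-order stages (time constant tau):
    the transition follows the Erlang CDF 1 - exp(-t/tau)*sum((t/tau)^k/k!)."""
    x = t / tau
    cdf = 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k)
                                   for k in range(n_stages))
    return h0 + (h_in - h0) * cdf

# Assumed 5-stage cascade, 10 s stage time constant:
for t in (0.0, 30.0, 120.0):
    print(t, round(cascade_step_response(t, n_stages=5, tau=10.0), 4))
```

The delayed, S-shaped rise of the last stage is exactly the stage-by-stage blending the model's assumptions describe.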

  20. Study on the dynamic holdup distribution of the pulsed extraction column

    Energy Technology Data Exchange (ETDEWEB)

    Wang, S.; Chen, J.; Wu, Q. [Tsinghua University, Beijing 100084 (China)

    2013-07-01

    In this study, a CSTR-cascade dynamic hydraulic model was developed to investigate the dynamic holdup distribution of a pulsed extraction column. It is assumed that the dynamic behavior of the dispersed-phase holdup in a pulsed extraction column is equivalent to the operation of multiple CSTRs in cascade, under the following assumptions: the holdup varies from stage to stage but is uniform within each stage; changes in the hydraulic parameters act first on the dispersed-phase inlet, and steady state is reached gradually through stage-by-stage mixing. The model was tested and verified using time-domain response curves of the average holdup. Nearly 150 experiments were carried out with different capillary columns, various feed liquids, and diverse continuous phases, under different operating conditions. The regression curves produced by the model show good consistency with the experimental results. After linking the model parameters to the operating conditions, the study further found that, for a given pulsed extraction column, the parameters correlate linearly with the pulse conditions only and are independent of the flow rate. The accuracy of the model, measured by the average holdup, corresponds to an absolute error of ±0.01. By providing simple and reliable predictions of the dynamic holdup distribution from relatively scarce hydraulic experimental data, the model can support boundary studies of hydraulics and mass transfer. (authors)

  1. Performance Evaluation of Spectroscopic Detectors for LEU Hold-up Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, Ramkumar [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutter, Greg [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McElroy, Robert Dennis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-06

    The hold-up measurement of low-enriched uranium materials may require use of alternate detector types relative to the measurement of highly enriched uranium. This is in part due to the difference in process scale (i.e., the components are generally larger for low-enriched uranium systems), but also because the characteristic gamma-ray lines from 235U used for assay of highly enriched uranium will be present at a much reduced intensity (on a per gram of uranium basis) at lower enrichments. Researchers at Oak Ridge National Laboratory examined the performance of several standard detector types, e.g., NaI(Tl), LaBr3(Ce), and HPGe, to select a suitable candidate for measuring and quantifying low-enriched uranium hold-up in process pipes and equipment at the Portsmouth gaseous diffusion plant. Detector characteristics, such as energy resolution (full width at half maximum) and net peak count rates at gamma ray energies spanning a range of 60–1332 keV, were measured for the above-mentioned detector types using the same sources and in the same geometry. Uranium enrichment standards (Certified Reference Material no. 969 and Certified Reference Material no. 146) were measured using each of the detector candidates in the same geometry. The net count rates recorded by each detector at 186 keV and 1,001 keV were plotted as a function of enrichment (atom percentage). Background measurements were made in unshielded and shielded configurations under both ambient and elevated conditions of 238U activity. The highly enriched uranium hold-up measurement campaign at the Portsmouth plant was performed on process equipment that had been cleaned out. Therefore, in most cases, the thickness of the uranium deposits was less than the “infinite thickness” for the 186 keV gamma rays to be completely self-attenuated. Because of this, in addition to measuring the 186 keV gamma, the 1,001 keV gamma ray from 234mPa—a daughter of 238U in secular

  2. Achieving Higher Accuracy in the Gamma-Ray Spectroscopic Assay of Holdup

    International Nuclear Information System (INIS)

    Russo, P.A.; Wenz, T.R.; Smith, S.E.; Harris, J.F.

    2000-01-01

    Gamma-ray spectroscopy is an important technique for the measurement of quantities of nuclear material holdup in processing equipment. Because the equipment in large facilities dedicated to uranium isotopic enrichment, uranium/plutonium scrap recovery or various stages of fuel fabrication is extensive, the total holdup may be large by its distribution alone, even if deposit thicknesses are small. Good accountability practices require unbiased measurements with uncertainties that are as small as possible. This paper describes new procedures for use with traditional holdup analysis methods based on gamma-ray spectroscopy. The procedures address the two sources of bias inherent in traditional gamma-ray measurements of holdup. Holdup measurements are performed with collimated, shielded gamma-ray detectors. The measurement distance is chosen to simplify the deposit geometry to that of a point, line or area. The quantitative holdup result is based on the net count rate of a representative gamma ray. This rate is corrected for contributions from room background and for attenuation by the process equipment. Traditional holdup measurements assume that the width of the point or line deposit is very small compared to the measurement distance, and that the self-attenuation effects can be neglected. Because each point or line deposit has a finite width and because self-attenuation affects all measurements, bias is incurred in both assumptions. In both cases the bias is negative, explaining the systematically low results of gamma-ray holdup measurements. The new procedures correct for bias that arises from both the finite-source effects and the gamma-ray self-attenuation. The procedures used to correct for both of these effects apply to the generalized geometries. One common empirical parameter is used for both corrections. It self-consistently limits the total error incurred (from uncertain knowledge of this parameter) in the combined correction process, so that it is

  3. Gas holdup in a reciprocating plate bioreactor: Non-Newtonian liquid phase

    Directory of Open Access Journals (Sweden)

    Naseva Olivera S.

    2002-01-01

    The gas holdup was studied for non-Newtonian liquids in gas-liquid and gas-liquid-solid reciprocating plate bioreactors. Aqueous solutions of carboxymethyl cellulose (CMC; Lucel, Lučane, Yugoslavia) of different degrees of polymerization (PP 200 and PP 1000) and concentrations (0.5 and 1%), polypropylene spheres (diameter 8.3 mm; sphere fractions of 3.8 and 6.6% by volume), and air were used as the liquid, solid, and gas phases. The gas holdup was found to depend on the vibration rate, the superficial gas velocity, the volume fraction of solid particles, and the rheological properties of the liquid phase. In both the gas-liquid and gas-liquid-solid systems studied, the gas holdup increased with increasing vibration rate and gas flow rate. The gas holdup was higher in three-phase systems than in two-phase ones, all other operating conditions being the same. Generally, the gas holdup increased with increasing volume fraction of solid particles, due to their dispersing action, and decreased with increasingly non-Newtonian behaviour (decreasing flow index), i.e., with increasing degree of polymerization and solution concentration of the CMC applied, as a result of gas bubble coalescence.

  4. Holdup Measurement System II (HMSII): Version 2.1. User's guide and software documentation

    International Nuclear Information System (INIS)

    Smith, S.E.

    1995-01-01

    The Holdup Measurement System II (HMSII) software is a database management package for holdup measurements. It is based on the generalized geometry holdup (GGH) methodology taught in the US Department of Energy Safeguards Technology Training Program course ''Nondestructive Assay of Special Nuclear Materials Holdup,'' which was developed and taught by Los Alamos National Laboratory (LANL). The HMSII was developed as a joint effort between LANL and the Oak Ridge Y-12 Plant, managed for the US Department of Energy by Lockheed Martin Energy Systems, Inc. The system is designed specifically for use with three types of multichannel analyzer (MCA): the Davidson Portable MCA, the EG&G Ortec MicroNOMAD (μNOMAD), and a new Miniature Modular MultiChannel Analyzer (M³CA) under development at LANL. It also assumes a 512-channel spectrum from a low-resolution (e.g., NaI) detector measuring uranium or plutonium. Another important hardware component of the system is a portable bar code reader (also called a DataLogger or Trakker) from Intermec Corporation; the 944X series and the JANUS 20XX series readers are compatible with the HMSII. The JANUS series reader is also a 386-compatible palmtop PC with MS-DOS 5.0 built in. Both series are programmable and control all aspects of field holdup data collection from the MCAs

  5. Holdup measurements of plutonium in glove box exhausts

    International Nuclear Information System (INIS)

    Glick, J.B.; Haas, F.X.; McKamy, J.N.; Garrett, A.G.

    1991-01-01

    A new measurement technique has been developed to quantify plutonium in process glove box exhausts. The technique, implemented at the Rocky Flats Plant, utilizes a shielded, collimated 0.5 in. x 0.5 in. bismuth germanate (BGO) gamma-ray detector. Pairs of measurements are made at one-foot intervals along the duct: one with the detector viewing the bottom of the duct and one viewing the top, with the detector crystal approximately 2 inches from the duct surface in each case. When the detector is in the bottom assay position, the holdup deposit is assumed to extend beyond the detector field of view, and the areal concentration of plutonium in g/cm² is obtained from this measurement. The deposit width is determined from a model relating it to the ratio of the count rates measured above and below the duct. The calculated width is then multiplied by the concentration from the bottom measurement to yield a mass-per-unit-length at that duct location, and the total plutonium mass is obtained by multiplying the duct length by the average of the mass-per-unit-length assays performed along the duct. The applicability of the technique is presented in a comparison of field measurement data to analysis results on material removed from the ducts. 3 refs., 3 figs., 1 tab
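
    The arithmetic of the technique can be sketched as follows. Here `width_from_ratio` is a hypothetical stand-in for the facility's empirical width model, and all rates and concentrations are made-up illustration values.

```python
from statistics import mean

FOOT_CM = 30.48  # measurement interval: one foot, in cm

def width_from_ratio(ratio, cal_slope=25.0):
    """Hypothetical stand-in for the model relating deposit width (cm)
    to the top/bottom count-rate ratio; the real relation comes from
    the facility's calibration, not from this linear guess."""
    return cal_slope * ratio

def duct_total_mass(bottom_conc, top_rates, bottom_rates):
    """bottom_conc: areal Pu concentration (g/cm^2) from each bottom assay;
    top_rates / bottom_rates: paired count rates at each one-foot station."""
    mass_per_length = [
        conc * width_from_ratio(top / bot)           # g/cm at this station
        for conc, top, bot in zip(bottom_conc, top_rates, bottom_rates)
    ]
    duct_length = FOOT_CM * len(mass_per_length)     # total scanned length
    return duct_length * mean(mass_per_length)       # grams of Pu

total = duct_total_mass([1.0e-3, 1.2e-3, 0.8e-3],
                        [12.0, 10.0, 8.0],
                        [40.0, 40.0, 40.0])
```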

  6. Variation law of gas holdup in an autoclave during the pressure leaching process by using a mixed-flow agitator

    Science.gov (United States)

    Tian, Lei; Liu, Yan; Tang, Jun-jie; Lü, Guo-zhi; Zhang, Ting-an

    2017-08-01

    The multiphase reactions of pressure leaching take place mainly in the liquid phase; gas holdup is therefore essential to the gas-liquid-solid reaction and to the extraction rate of valuable metals. In this paper, a transparent quartz autoclave, a six-blade disc turbine agitator, and a high-speed camera were used to investigate the gas holdup of the pressure leaching process, and experiments were carried out to determine the effects of agitation rate, temperature, and oxygen partial pressure. When the agitation rate increased from 350 to 600 r/min, the gas holdup increased from 0.10% to 0.64%; when the temperature increased from 363 to 423 K, it increased from 0.14% to 0.20%; and when the oxygen partial pressure increased from 0.1 to 0.8 MPa, it increased from 0.13% to 0.19%. A similarity criterion was established from the homogeneity principle and Buckingham's π theorem, and an empirical gas-holdup equation was deduced from the experimental data and similarity theory: ε = 4.54 × 10⁻¹¹ n^3.65 T^2.08 Pg^0.18. The exponents show that agitation rate has the greatest impact on gas holdup in the pressure leaching process with this mixed-flow agitator.
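
    The quoted correlation can be evaluated directly. Units follow the paper's convention (n in r/min, T in K, Pg in MPa), with the prefactor absorbing those unit choices; the point of the sketch is the relative sensitivity carried by the exponents, not the absolute prefactor.

```python
def gas_holdup(n, T, Pg, K=4.54e-11, a=3.65, b=2.08, c=0.18):
    """Empirical correlation quoted in the abstract:
        eps = K * n**3.65 * T**2.08 * Pg**0.18
    The exponent on agitation rate n is the largest, so n dominates.
    """
    return K * n**a * T**b * Pg**c

# Relative sensitivity: double each variable in turn and compare gains.
base = gas_holdup(350, 363, 0.1)
gain_n  = gas_holdup(700, 363, 0.1) / base    # = 2**3.65 ~ 12.6
gain_T  = gas_holdup(350, 726, 0.1) / base    # = 2**2.08 ~ 4.2
gain_Pg = gas_holdup(350, 363, 0.2) / base    # = 2**0.18 ~ 1.13
```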

  7. Imaging phase holdup distribution of three phase flow systems using dual source gamma ray tomography

    International Nuclear Information System (INIS)

    Varma, Rajneesh; Al-Dahhan, Muthanna; O'Sullivan, Joseph

    2008-01-01

    Multiphase reaction and process systems are used abundantly in the chemical and biochemical industries, and tomography has been successfully employed to visualize their hydrodynamics. Most tomography methods (gamma ray, X-ray, and electrical capacitance/resistance) have been implemented successfully for two-phase dynamic systems, but a significant number of chemical and biochemical systems involve three dynamic phases, and efforts to develop tomography techniques for such systems have met with only partial success, for specific systems under limited operating conditions. A dual-source tomography scanner has been developed that uses the 661 keV and 1332 keV photopeaks of 137Cs and 60Co for imaging three-phase systems. A new approach applies the polyenergetic Alternating Minimization (A-M) algorithm, developed by O'Sullivan and Benac (2007), to image the holdup distribution in three-phase dynamic systems. It avoids the traditional post-processing approach, in which attenuation images of the mixed flow obtained from gamma-ray photons of two different energies are combined to determine the holdup of the three phases; instead, the holdup images are reconstructed directly from the gamma-ray transmission data. The scanner and algorithm were validated using a three-phase phantom. Following this validation, three-phase holdup studies were carried out by dual-energy gamma-ray tomography in a slurry bubble column containing gas, liquid, and solid phases in a dynamic state. The key results of these holdup distribution studies, along with the validation of the dual-source gamma-ray tomography system, are presented and discussed
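
    The traditional post-processing that the A-M approach replaces amounts to a small linear solve: two mixture attenuation measurements plus the constraint that the three holdups sum to one. The pure-phase attenuation coefficients below are illustrative values, not measured ones.

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) of the pure phases
# (gas, liquid, solid) at the two photopeak energies used by the scanner.
MU_661  = np.array([1.0e-4, 8.6e-2, 2.2e-1])
MU_1332 = np.array([5.0e-5, 6.1e-2, 1.5e-1])

def phase_holdups(mu_mix_661, mu_mix_1332):
    """Solve for (eps_gas, eps_liquid, eps_solid) from the two measured
    mixture attenuations: a mixture-rule equation at each energy plus
    the closure condition eps_g + eps_l + eps_s = 1."""
    A = np.vstack([MU_661, MU_1332, np.ones(3)])
    b = np.array([mu_mix_661, mu_mix_1332, 1.0])
    return np.linalg.solve(A, b)

# Round trip on a synthetic mixture: 30% gas, 50% liquid, 20% solid.
eps_true = np.array([0.3, 0.5, 0.2])
eps = phase_holdups(MU_661 @ eps_true, MU_1332 @ eps_true)
```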

  8. Reprocessing of spent nuclear fuel, Annex 1: Experimental facility for testing and development of pulsed columns and auxiliary devices; Prerada isluzenog nuklearnog goriva, Prilog 1: Opitno postrojenje za ispitivanje i razvoj pulsnih kolona i pomocnih uredjaja

    Energy Technology Data Exchange (ETDEWEB)

    Gal, I [Institute of Nuclear Sciences Boris Kidric, Laboratorija za hemiju visoke aktivnosti, Vinca, Beograd (Serbia and Montenegro)

    1964-12-15

    After the design project for the experimental facility for testing and development of pulsed columns for spent fuel reprocessing was completed, construction started at the end of 1963 and was finished in August 1964. The facility was built in Kjeller, Norway, within a cooperation project between our country and Norway. This report gives a brief description of the facility and the action plan for its implementation.

  9. Fissile material holdup monitoring in the PREPP [Process Experimental Pilot Plant] process

    International Nuclear Information System (INIS)

    Becker, G.K.; Pawelko, R.J.

    1989-01-01

    The Process Experimental Pilot Plant (PREPP) is an incineration system designed to thermally process mixed transuranic (TRU) waste and TRU-contaminated low-level waste. The TRU isotopic composition is that of weapons-grade plutonium (Pu), which necessitates that criticality prevention measures be incorporated into the plant design and operation. Criticality safety in the PREPP process is assured through mass and moderation control in conjunction with favorable vessel geometries. This paper concerns the Pu mass holdup instrumentation system, which is an integral part of the in-process mass control strategy. Plant vessels and components requiring real-time mass holdup measurements were selected based on their evaluated potential for achieving physically credible Pu mass loadings, and associated parameters, that could lead to a criticality event. Locations where the parameters requisite for criticality could not physically be achieved under credible plant conditions required only periodic portable holdup monitoring. Based on these analyses, five real-time holdup monitoring locations were identified for criticality assurance purposes. An additional real-time instrument is part of the system but serves primarily to provide operational support data. 1 fig

  10. Influence of pressure on the properties of chromatographic columns. II. The column hold-up volume.

    Science.gov (United States)

    Gritti, Fabrice; Martin, Michel; Guiochon, Georges

    2005-04-08

    The effect of the local pressure and of the average column pressure on the column hold-up volume was investigated between 1 and 400 bar, from theoretical and experimental points of view. Calculations based on the elasticity of the solids involved (column wall and packing material) and the compressibility of the liquid phase show that the observed increase of the column hold-up volume with increasing pressure is correlated with (in order of decreasing importance): (1) the compressibility of the mobile phase (+1 to 5%); (2) in RPLC, the compressibility of the C18-bonded layer on the surface of the silica (+0.5 to 1%); and (3) the expansion of the column tube. Columns packed with the pure Resolve silica (0% carbon), the derivatized Resolve-C18 (10% carbon), and the Symmetry-C18 (20% carbon) adsorbents were studied, using water, methanol, or n-pentane as the mobile phase; these solvents have different compressibilities. However, 1% of the relative increase of the column hold-up volume observed when the pressure was raised is not accounted for by the compressibilities of either the solvent or the C18-bonded phase. It is due to the influence of pressure on the retention behavior of thiourea, the compound used as the tracer to measure the hold-up volume.

  11. Integration and the hold-up problem in the design organization for engineering projects

    NARCIS (Netherlands)

    Zerjav, Vedran; Hartmann, Timo; Javernick-Will, A.; Chinowsky, P.

    2012-01-01

    The paper presents a perspective of the design organization in engineering projects based on the economic concept of the hold-up problem. By integrating the economic theories on the boundaries of organizations into the existing knowledge on design in engineering projects, the paper hypothesizes a

  12. Influence of small amounts of additives on gas hold-up, bubble size, and interfacial area

    NARCIS (Netherlands)

    Cents, A. H. G.; Jansen, D. J. W.; Brilman, D. W. F.; Versteeg, G. F.

    2005-01-01

    The gas-liquid interfacial area, which is determined by the gas hold-up and the Sauter mean bubble diameter, determines the production rate in many industrial processes. The effect of additives on this interfacial area is, especially in multiphase systems (gas-liquid-solid, gas-liquid-liquid), often

  13. Concerns and evidence for ex-post hold-up with essential patents

    NARCIS (Netherlands)

    Bekkers, R.N.A.

    2015-01-01

    Patented technologies may add significant value to technical standards, but the owners of patents that are necessarily required to implement a standard (“essential patents”) obtain a particularly powerful position. One of the widely recognized risks here is patent holdup, where the patent

  14. Portable gamma-ray holdup and attributes measurements of high- and variable-burnup plutonium

    International Nuclear Information System (INIS)

    Wenz, T.R.; Russo, P.A.; Miller, M.C.; Menlove, H.O.; Takahashi, S.; Yamamoto, Y.; Aoki, I.

    1991-01-01

    High-burnup plutonium holdup has been assayed quantitatively by low-resolution gamma-ray spectrometry. The assay was calibrated with four plutonium standards representing a range of fuel burnup and 241Am content. Selection of a calibration standard based on its qualitative spectral similarity to gamma-ray spectra of the process material is partially responsible for the success of these holdup measurements. The spectral analysis method is based on the determination of net counts in a single spectral region of interest (ROI). However, the low-resolution gamma-ray assay signal for high-burnup plutonium includes unknown amounts of contamination from 241Am, and for most needs the range of calibration standards required for this selection procedure is not available. A new low-resolution gamma-ray spectral analysis procedure for assay of 239Pu has therefore been developed. The procedure uses calculated isotope activity ratios and the measured net counts in three spectral ROIs to evaluate and remove the 241Am contamination from the 239Pu assay signal on a spectrum-by-spectrum basis. The calibration for the new procedure requires only a single plutonium standard, and the procedure also provides a measure of the burnup and age attributes of holdup deposits. The new procedure has been demonstrated using portable gamma-ray spectroscopy equipment for a wide range of plutonium standards and has also been applied to the assay of 239Pu holdup in a mixed oxide fuel fabrication facility. 10 refs., 5 figs., 3 tabs

  15. Solid foam packings for multiphase reactors: Modelling of liquid holdup and mass transfer

    NARCIS (Netherlands)

    Stemmet, C.P.; Schaaf, van der J.; Kuster, B.F.M.; Schouten, J.C.

    2006-01-01

    In this paper, experimental and modeling results are presented of the liquid holdup and gas–liquid mass transfer characteristics of solid foam packings. Experiments were done in a semi-2D transparent bubble column with solid foam packings of aluminum in the range of 5–40 pores per inch (ppi). The

  16. The importance of holdup in contracting: Evidence from a field experiment

    NARCIS (Netherlands)

    Iyer, R.; Schoar, A.

    2008-01-01

    This paper explores how the relationship specificity of the investment affects the ex-ante structure of contracts and the ex-post resolution of an ensuing holdup problem. We set up a field experiment in the wholesale market for pens in India where we sent entrepreneurs as auditors to procure large

  17. Hydrodynamic flow regimes, gas holdup, and liquid circulation in airlift reactors

    Energy Technology Data Exchange (ETDEWEB)

    Abashar, M.E.; Narsingh, U.; Rouillard, A.E.; Judd, R. [Univ. of Durban (South Africa)

    1998-04-01

    This study reports an experimental investigation into the hydrodynamic behavior of an external-loop airlift reactor (ALR) for the air-water system. Three distinct flow regimes are identified, namely the homogeneous, transition, and heterogeneous regimes. The transition between homogeneous and heterogeneous flow is observed to occur over a wide range rather than at a single point, as has been previously reported in the literature. A gas holdup correlation is developed for each flow regime. The correlations fit the experimental gas holdup data with very good accuracy (within ±5%). It would appear, therefore, that a deterministic equation, a function of the reactor geometry and the system's physical properties, is likely to exist for each flow regime in ALRs. New data concerning the axial variation of gas holdup are reported in which a minimum value is observed; this phenomenon is discussed and an explanation offered. Discrimination between two sound theoretical models, model 1 (Chisti et al., 1988) and model 2 (Garcia Calvo, 1989), shows that model 1 predicts the liquid circulation velocity satisfactorily, with an error of less than ±10%. The good predictive features of model 1 may be due to the fact that it allows for significant energy dissipation by wakes behind bubbles. Model 1 is further improved by the new gas holdup correlations derived for the three flow regimes.

  18. Segmented gamma scanning method for measuring holdup in the spherical container

    International Nuclear Information System (INIS)

    Deng Jingshan; Li Ze; Gan Lin; Lu Wenguang; Dong Mingli

    2007-01-01

    Some special nuclear material (SNM) is inevitably deposited in the equipment (mixers, reactors) of a nuclear material process line, and knowing the quantity of this holdup accurately is very important for nuclear material accountability and criticality safety. This paper presents a segmented gamma scanning method for SNM holdup measurement of a spherical container whose left, right, and rear sides are occupied by other equipment, so that detectors can be placed only at its front. The nuclear material deposited in the spherical container can be treated as a spherical shell source divided into many layers. The detectors scan the shell source layer by layer from top to bottom to obtain projection data, from which the deposited material distribution can be reconstructed using the Least Squares (LS) or Maximum Likelihood (ML) method. An accurate total holdup is then obtained by summing all the reconstructed segment values. The measurement method for holdup in the spherical container was verified with Monte Carlo simulation calculations and experiment. (authors)
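
    The Least Squares branch of the reconstruction can be sketched as follows. The response matrix here is random purely for illustration; in practice it would come from calibration measurements or Monte Carlo modeling of each detector position's view of each shell layer.

```python
import numpy as np

rng = np.random.default_rng(42)

n_positions, n_layers = 8, 5

# A[i, j]: detector response at scan position i to a unit deposit in
# shell layer j (illustrative stand-in for a calibrated response matrix).
A = rng.uniform(0.1, 1.0, size=(n_positions, n_layers))

x_true = np.array([5.0, 2.0, 0.0, 1.0, 3.0])   # grams of SNM per layer
b = A @ x_true                                  # noise-free projection data

# Least Squares reconstruction of the layer-by-layer deposit.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

total_holdup = x_ls.sum()                       # sum of segmental values
```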

  19. HOLDUP MEASUREMENTS FOR VISUAL EXAMINATION GLOVEBOXES AT THE SAVANNAH RIVER SITE

    Energy Technology Data Exchange (ETDEWEB)

    Sigg, R

    2006-05-03

    Visual Examination (VE) gloveboxes are used at the Savannah River Site (SRS) to remediate transuranic waste (TRU) drums. Noncompliant items are removed before the drums undergo further characterization in preparation for shipment to the Waste Isolation Pilot Plant (WIPP). Maintaining the flow of drums through the remediation process is critical to the program's seven-days-per-week operation. Conservative assumptions are used to ensure that glovebox contamination from this continual operation is below acceptable limits, and holdup measurements are performed to confirm that these assumptions are conservative. High Cs-137 backgrounds in the VE glovebox areas preclude the use of a sodium iodide spectrometer, so a high-purity germanium (HPGe) detector, having superior resolution, is used. Plutonium-239 is usually the nuclide of interest; however, Pu-241, Np-237 (including its daughter Pa-233) and Pu-238 (if detected) are typically assayed. Cs-137 and Co-60 may also be detected but are not reported since they do not contribute to the Pu-239 Fissile Gram Equivalent or Pu-239 Equivalent Curies. HEPA filters, drums and waste boxes are also assayed by the same methodology. If, for example, the HEPA filter is contained in a stainless steel housing, attenuation corrections must be applied for both the filter and the housing. Dimensions, detector locations, materials and densities are provided as inputs to Ortec's ISOTOPIC software to estimate attenuation and geometry corrections for the measurement positions. This paper discusses the methodology, results and limitations of these measurements for different VE glovebox configurations.

  20. In-process hold-up as a measure of safeguards significance

    International Nuclear Information System (INIS)

    Hamlin, A.G.

    1983-01-01

    This paper examines the use of the in-process hold-up itself, as a measure of safeguards significance. It is argued that for any process plant it is possible to define design limits for in-process hold-up, outside which the plant will not operate, or will operate in a detectably abnormal manner. It follows, therefore, that if the in-process hold-up can be derived at frequent intervals by input/output analysis from the start of the campaign, the only diversion that can be made from it during that campaign is limited to the quantity necessary to move the apparent in-process hold-up from its normal operating condition to the upper limiting condition. It also follows that detection of this diversion is as positive for protracted diversion as for abrupt diversion. If that part of the in-process inventory that is only measurable by input/output analysis has an upper operating limit that differs from its normal operating limit by less than a significant safeguards quantity of the material in question, the IAEA's criteria for both quantity and timeliness can be met by a combination of input/output analysis to determine in-process hold-up during the campaign, together with a material balance over the campaign. The paper examines the possibility of applying this measure to process plants in general, discusses means of minimizing the in-process inventory that must be determined by input/output analysis, and the performance required of the input and output analysis. It concludes that with current precision of measurement and with one input and one output batch per day, each measured, the method would be satisfactory for a campaign lasting nearly a year and involving 6 tonnes of plutonium. The paper examines the considerable advantages in verification that would arise from limiting safeguards analyses to the two points of input and output. (author)
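
    The bookkeeping behind this input/output measure is simple to state in code. The batch values and design limits below are hypothetical numbers, not figures from the paper.

```python
def apparent_holdup(initial, inputs, outputs):
    """Apparent in-process hold-up after each accounting interval,
    derived purely from input/output analysis:
        H_t = H_0 + sum(inputs[:t]) - sum(outputs[:t])
    """
    h, series = initial, []
    for fed, shipped in zip(inputs, outputs):
        h += fed - shipped
        series.append(h)
    return series

def limit_alarms(series, upper_limit):
    """Intervals at which the apparent hold-up exceeds its design
    operating limit; any diversion during the campaign is bounded by
    (upper_limit - normal operating hold-up)."""
    return [t for t, h in enumerate(series) if h > upper_limit]

# One input and one output batch per day (kg), hypothetical values.
series = apparent_holdup(50.0,
                         inputs=[20.0] * 6,
                         outputs=[20.0, 20.0, 19.0, 19.0, 19.0, 20.0])
alarms = limit_alarms(series, upper_limit=52.0)
```

    With these numbers the apparent hold-up drifts from 50 kg to 53 kg and crosses the 52 kg design limit on the fifth day, which is exactly the kind of detectably abnormal operation the measure relies on.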

  1. Gamma-ray imaging and holdup assays of 235-F PuFF cells 1 & 2

    Energy Technology Data Exchange (ETDEWEB)

    Aucott, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-12-20

    Savannah River National Laboratory (SRNL) Nuclear Measurements (L4120) was tasked with performing enhanced characterization of the holdup in the PuFF shielded cells. Assays were performed in accordance with L16.1-ADS-2460 using two high-resolution gamma-ray detectors. The first detector, an In Situ Object Counting System (ISOCS)-characterized detector, was used in conjunction with the ISOCS Geometry Composer software to quantify grams of holdup. The second detector, a Germanium Gamma-ray Imager (GeGI), was used to visualize the location and relative intensity of the holdup in the cells. Carts and collimators were specially designed to perform optimum assays of the cells. Thick, pencil-beam tungsten collimators were fabricated to allow for extremely precise targeting of items of interest inside the cells. Carts were designed with a wide range of motion to position and align the detectors. A total of 24 measurements were made, each typically 24 hours or longer to provide sufficient statistical precision. This report presents the results of the enhanced characterization for cells 1 and 2. The measured gram values agree very well with results from the 2014 study. In addition, images were created using both the 2014 data and the new GeGI data. The GeGI images of the cell walls reveal significant Pu-238 holdup on the surface of the walls in cells 1 and 2. Additionally, holdup is visible in the two pass-throughs from cell 1 to the wing cabinets. This report documents the final element (exterior measurements coupled with gamma-ray imaging and modeling) of the enhanced characterization of cells 1-5 (East Cell Line).

  2. Prediction of gas hold-up for alcohol solutions in a draft-tube bubble column

    Directory of Open Access Journals (Sweden)

    Albijanić Boris V.

    2006-01-01

    This paper deals with the prediction of the overall gas hold-up (εg) in diluted solutions of C1-C4 alcohols in a draft-tube bubble column, by applying several newly proposed correlations and some well-known equations. Experiments were carried out in a column consisting of two coaxial glass tubes with a single sparger. The gas phase was air, while the liquid phases were aqueous solutions of alcohols at concentrations of 0.5% w/w and 1% w/w. The overall gas hold-up was determined by the volume expansion technique. The following order of εg values was observed: water < methanol < ethanol < n-propanol < n-butanol. The concentration of the applied alcohol appeared to be less significant than the sort of alcohol itself. The best newly proposed correlation predicts the experimental data with a mean square deviation of s² = 0.58 × 10⁻⁴.

  3. Reducing appropriable quasi rents to combat the hold-up problem in contract agriculture

    DEFF Research Database (Denmark)

    Karantininis, Kostas; Graversen, Jesper Tranbjerg

    2005-01-01

    This paper examines the problem of hold-up in the agri-food sector. Production contracts offered to future Danish broiler producers give rise to specific investments and appropriable quasi rents. Farmers are vulnerable to opportunistic behaviour by poultry processors, which materializes in lower prices and results in lower investments. One obvious alternative to producing broilers is to produce pigs, since the pork industry is very well developed, with pork slaughter and processing mainly (more than 90%) run by two producer-owned cooperatives. It might then be possible for the farmer, when building the broiler house, to prepare a viable future change in production from broilers to pigs. This will increase the bargaining power of farmers and may result in higher producer prices. Based on the general model of the hold-up problem with co-specific investments (Koss & Eaton, 1997)...

  4. Plutonium Finishing Plant (PFP) Generalized Geometry Holdup Calculations and Total Measurement Uncertainty

    International Nuclear Information System (INIS)

    Keele, B.D.

    2005-01-01

    A collimated portable gamma-ray detector will be used to quantify the plutonium content of items that can be approximated as a point, line, or area geometry with respect to the detector. These items can include ducts, piping, glove boxes, isolated equipment inside of gloveboxes, and HEPA filters. The Generalized Geometry Holdup (GGH) model is used for the reduction of counting data. This document specifies the calculations to reduce counting data into contained plutonium and the associated total measurement uncertainty.
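As a rough illustration of the point-geometry case of such a model (the function name, calibration constant, and correction factor below are hypothetical, not the PFP's actual GGH data-reduction procedure): the count rate from a compact deposit falls off as 1/r², so contained mass scales as rate times r² over a calibration constant:

```python
def point_source_mass_g(net_count_rate_cps: float, distance_m: float,
                        cal_cps_m2_per_g: float,
                        attenuation_correction: float = 1.0) -> float:
    """Point-geometry holdup estimate: count rate falls off as 1/r^2, so
    the contained mass scales as rate * r^2 divided by a detector
    calibration constant (cps*m^2 per gram), times any attenuation
    correction factor (illustrative values only)."""
    return (net_count_rate_cps * distance_m ** 2 * attenuation_correction
            / cal_cps_m2_per_g)

# e.g. 400 cps net at 0.5 m with a calibration of 10 cps*m^2/g
mass = point_source_mass_g(400.0, 0.5, 10.0)
```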

  5. Comparison of steam-generator liquid holdup and core uncovery in two facilities of differing scale

    International Nuclear Information System (INIS)

    Motley, F.; Schultz, R.

    1987-01-01

    This paper reports on Run SB-CL-05, a test similar to Semiscale Run S-UT-8. The test results show that the core was uncovered briefly during the accident and that the rods overheated at certain core locations. Liquid holdup on the upflow side of the steam-generator tubes was observed. After the loop seal cleared, the core refilled and the rods cooled. These behaviors were similar to those observed in the Semiscale run. The Large-Scale Test Facility (LSTF) Run SB-CL-06 is a counterpart test to Semiscale Run S-LH-01. The comparison of the results of both tests shows similar phenomena. The similarity of phenomena in these two facilities builds confidence that these results can be expected to occur in a PWR. Similar holdup has now been observed in the 6 tubes of Semiscale and in the 141 tubes of LSTF. It is now more believable that holdup may occur in a full-scale steam generator with 3000 or more tubes. These results confirm the scaling of these phenomena from Semiscale (1/1705) to LSTF (1/48). The TRAC results for SB-CL-05 are in reasonable agreement with the test data. TRAC predicted the core uncovery and resulting rod heatup. The liquid holdup on the upflow side of the steam-generator tubes was also correctly predicted. The clearing of the loop seal allowed core recovery and cooled the overheated rods just as it had in the data. The TRAC analysis results of Run SB-CL-05 are similar to those from Semiscale Run S-UT-8. The ability of the TRAC code to calculate the phenomena equally well in the two experiments of different scales confirms the scalability of the many models in the code that are important in calculating this small break

  6. Measurement of liquid holdup and axial dispersion in trickle bed reactors using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Saroha, A.K.; Nikam, K.D.P.

    2000-01-01

    The holdup and axial dispersion of the aqueous phase have been measured in trickle bed reactors as a function of liquid and gas flow rates using a radioisotope tracer technique. Experiments were carried out in a glass column of inner diameter 15.2×10⁻² m for an air-water system using three different types of packings, i.e., non-porous glass beads and porous catalysts of tablet and extrudate shape. The ranges of liquid and gas flow rates used were 8.3×10⁻⁵-3.3×10⁻⁴ m³/s and 0-6.67×10⁻⁴ m³/s, respectively. Residence time distributions of the liquid and gas phases were measured and mean residence times were determined. The values of liquid holdup were calculated from the measured mean residence times. It was observed that the liquid holdup increases with increasing liquid flow rate and was independent of the gas flow rates used in the study. A two-parameter axial dispersion model was used to simulate the measured residence time distribution data, and values of mean residence time and Peclet number were obtained. It was observed that the Peclet number increases with increasing liquid flow rate for glass beads and tablets and remains almost constant for extrudates. The values of mean residence time obtained from model simulation were found to be in good agreement with the values measured experimentally. (author)
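The step from measured mean residence time to liquid holdup can be sketched in one line, assuming holdup is defined as the liquid fraction of the total bed volume:

```python
def liquid_holdup(mean_residence_time_s: float,
                  liquid_flow_m3_s: float,
                  bed_volume_m3: float) -> float:
    """Dynamic liquid holdup as a fraction of bed volume: the liquid
    volume held in the bed is mean residence time x volumetric flow."""
    return mean_residence_time_s * liquid_flow_m3_s / bed_volume_m3

# e.g. a 120 s tracer mean residence time at 1.5e-4 m^3/s in a 0.1 m^3 bed
holdup = liquid_holdup(120.0, 1.5e-4, 0.1)
```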

  7. Metal droplet holdup in the thick slag layer subjected to bottom gas injection; Gas sokofuki wo tomonau atsui slag sonai ni okeru metal teki no holdup

    Energy Technology Data Exchange (ETDEWEB)

    Takashima, S; Iguchi, M [Hokkaido University, Sapporo (Japan)

    2000-04-01

    Model experiments were carried out to investigate the bubble and liquid flow characteristics in a bottom blowing bath covered with a thick slag layer, typical of in-bath smelting reduction processes. An aqueous ZnCl₂ solution and silicone oil were used as the models for molten metal and molten slag, respectively. The density ratio of the solution to the silicone oil was 1.7, close to the steel/slag density ratio of 2.0 to 2.2 in practice. The diameter of the vessel containing the two liquids was changed over a wide range. The holdup of the solution carried up by bubbles into the upper silicone oil layer was measured with a suction tube. The volume of the solution, V_m, was dependent mainly on the density difference. Empirical correlations for V_m and the penetration height of the solution were derived. (author)

  8. Plutonium calorimetry and SNM holdup measurements. Progress report for the period March 1976--August 1976

    International Nuclear Information System (INIS)

    Brumbach, S.B.; Finkbeiner, A.M.; Lewis, R.N.; Perry, R.B.

    1977-02-01

    The calorimetric instrumentation developed at Argonne National Laboratory (ANL) for making nondestructive measurements of the plutonium content of fuel rods is discussed. Measurements with these instruments are relatively fast (i.e., 15 to 20 minutes) when compared to the several hours usually required with more conventional calorimeters and for this reason are called ''fast-response.'' Most of the discussion concerns the One-Meter and Four-Meter Fuel-Rod Calorimeters and the Analytical Small-Sample Calorimeter. However, to provide background and continuity where needed, some discussion is devoted to the three earlier calorimeters that have been described previously in the literature. A brief review is then presented of the literature on plutonium holdup measurements. The use of gamma-ray techniques for holdup measurements is discussed, and results are given for the determination of sample thickness using the ratio of intensities of high- and low-energy gamma rays. The measurements cover the plutonium metal thickness range from 0.001 to 0.120 inches. The design of a gamma-ray collimator with 37 parallel holes is also discussed. Neutron-counting experiments using BF₃ proportional counters embedded in two polyethylene slabs are described. This detector configuration is characterized for its sensitivity to sample and background plutonium, counting both coincidence (fission) and total neutrons. In addition, the use of infrared imaging devices to measure small temperature differences is considered for locating large amounts of plutonium holdup and for performing fast attribute checks on fabricated fuel elements.

  9. Comparison Study on Empirical Correlation for Mass Transfer Coefficient with Gas Hold-up and Input Power of Aeration Process

    International Nuclear Information System (INIS)

    Park, Sang Kyoo; Yang, Hei Cheon

    2017-01-01

    As stricter environmental regulations have led to an increase in water treatment costs, it is necessary to quantitatively study the input power of the aeration process to improve the energy efficiency of water treatment processes. The objective of this study is to propose empirical correlations for the mass transfer coefficient with the gas hold-up and input power in order to investigate the mass transfer characteristics of the aeration process. It was found that as the input power increases, the mass transfer coefficient increases because of the decrease of gas hold-up and the increase of Reynolds number, penetration length, and dispersion of the mixed flow. The correlations for the volumetric mass transfer coefficients with gas hold-up and input power were consistent with the experimental data, with a maximum deviation of less than approximately ±10.0%.

  10. Comparison Study on Empirical Correlation for Mass Transfer Coefficient with Gas Hold-up and Input Power of Aeration Process

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Kyoo; Yang, Hei Cheon [Chonnam Nat’l Univ., Gwangju (Korea, Republic of)

    2017-06-15

    As stricter environmental regulations have led to an increase in water treatment costs, it is necessary to quantitatively study the input power of the aeration process to improve the energy efficiency of water treatment processes. The objective of this study is to propose empirical correlations for the mass transfer coefficient with the gas hold-up and input power in order to investigate the mass transfer characteristics of the aeration process. It was found that as the input power increases, the mass transfer coefficient increases because of the decrease of gas hold-up and the increase of Reynolds number, penetration length, and dispersion of the mixed flow. The correlations for the volumetric mass transfer coefficients with gas hold-up and input power were consistent with the experimental data, with a maximum deviation of less than approximately ±10.0%.

  11. Gamma-Ray Hold-up Measurements and Results for Casa 2 and Casa 3 at TA-18

    International Nuclear Information System (INIS)

    Desimone, David J.; Vo, Duc Ta

    2016-01-01

    Numerous critical assembly experiments were performed at TA-18 beginning in the 1940s. Several buildings, among them Casa 2 and Casa 3, were constructed to house these experiments. Gamma-ray hold-up measurements and analyses were performed for Casa 2 and Casa 3 in November/December 2015 to support decommissioning and demolition of the facilities. A technique called room hold-up was used to measure the nuclear materials. A grid pattern was laid out on the large room floor at intervals of approximately 9-10 feet. A three- to five-minute measurement was taken with a high-purity germanium (HPGe) detector at each location. Several measurements were also taken in two storage vaults of Casa 3. A calibration check of the detectors showed that the efficiency and energy scale were stable. The final results of the hold-up measurements for Casa 2 and 3 are given.
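A heavily simplified sketch of how a single grid-point HPGe measurement might be reduced to grams of material (all parameter values are illustrative assumptions; real room hold-up analysis also applies geometry and attenuation corrections):

```python
def holdup_mass_g(net_peak_counts: float, live_time_s: float,
                  efficiency: float, gamma_yield: float,
                  specific_activity_bq_per_g: float) -> float:
    """Net peak count rate divided by absolute detection efficiency and
    gamma emission probability gives source activity (Bq); dividing by the
    nuclide's specific activity converts activity to grams.  All inputs
    here are illustrative, not calibration values from the survey."""
    activity_bq = net_peak_counts / (live_time_s * efficiency * gamma_yield)
    return activity_bq / specific_activity_bq_per_g

# summing hypothetical grid-point results gives a room total
total_g = sum(holdup_mass_g(c, 300.0, 1e-4, 0.57, 2.3e9)
              for c in [1200.0, 800.0, 450.0])
```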

  12. Step voltage with periodic hold-up etching: A novel porous silicon formation

    International Nuclear Information System (INIS)

    Naddaf, M.; Awad, F.; Soukeih, M.

    2007-01-01

    A novel etching method for preparing light-emitting porous silicon (PS) is developed. A staircase voltage is applied periodically, and held up for different periods of time, between p-type silicon wafers and a graphite electrode in HF-based solutions. The applied staircase voltage (0-30 V) is ramped in equal steps of 0.5 V for 6 s each, and held at 30 V for 30 s at a current of 6 mA. The current during the hold-up time (0 V) was less than 10 μA. The room-temperature photoluminescence (PL) behavior of the PS samples as a function of etching parameters has been investigated. The intensity of the PL peak initially increases and blue-shifts with increasing etching time, but decreases after prolonged etching. These results are correlated with changes in surface morphology studied by atomic force microscopy (AFM) and with porosity and electrical conductance measurements. The time for which the applied voltage is held up during the formation process is found to strongly affect the PS properties. On increasing the hold-up time, the intensity of the PL peak increases and blue-shifts. The contribution of holding up the applied steps during PS formation is seen to be more or less similar to that of a post chemical etching process. It is demonstrated that this method can yield a porous silicon layer with stronger and more blue-shifted photoluminescence than a layer prepared by DC etching.

  13. Step voltage with periodic hold-up etching: A novel porous silicon formation

    Energy Technology Data Exchange (ETDEWEB)

    Naddaf, M. [Department of Physics, Atomic Energy Commission of Syria (AECS), Damascus P.O. Box 6091 (Syrian Arab Republic)]. E-mail: scientific@aec.org.sy; Awad, F. [Department of Physics, Atomic Energy Commission of Syria (AECS), Damascus P.O. Box 6091 (Syrian Arab Republic); Soukeih, M. [Department of Physics, Atomic Energy Commission of Syria (AECS), Damascus P.O. Box 6091 (Syrian Arab Republic)

    2007-05-16

    A novel etching method for preparing light-emitting porous silicon (PS) is developed. A staircase voltage is applied periodically, and held up for different periods of time, between p-type silicon wafers and a graphite electrode in HF-based solutions. The applied staircase voltage (0-30 V) is ramped in equal steps of 0.5 V for 6 s each, and held at 30 V for 30 s at a current of 6 mA. The current during the hold-up time (0 V) was less than 10 μA. The room-temperature photoluminescence (PL) behavior of the PS samples as a function of etching parameters has been investigated. The intensity of the PL peak initially increases and blue-shifts with increasing etching time, but decreases after prolonged etching. These results are correlated with changes in surface morphology studied by atomic force microscopy (AFM) and with porosity and electrical conductance measurements. The time for which the applied voltage is held up during the formation process is found to strongly affect the PS properties. On increasing the hold-up time, the intensity of the PL peak increases and blue-shifts. The contribution of holding up the applied steps during PS formation is seen to be more or less similar to that of a post chemical etching process. It is demonstrated that this method can yield a porous silicon layer with stronger and more blue-shifted photoluminescence than a layer prepared by DC etching.

  14. International Workshop on Best Practices in Material Hold-Up Monitoring

    International Nuclear Information System (INIS)

    Pickett, Chris A; Coates, Cameron W.

    2008-01-01

    In the fall of 2006, the Oak Ridge National Laboratory (ORNL) hosted an INMM-sponsored International Workshop on Best Practices in Material Hold-Up Monitoring. This workshop represented the first time in over 20 years that the international community had gathered to discuss pertinent hold-up topics and needs. More than one hundred people attended the workshop. Their expertise in the field ranged from novice to expert, and they shared their experiences and expertise throughout the week of the workshop. Presenters discussed techniques that have been used worldwide to detect and characterize nuclear materials held up in processes and equipment and the policies used to report quantities detected. The primary goal of the workshop was to compile information on the best practices and lessons learned and to make this information available for sharing throughout the international community. This paper discusses the information that was produced from four separate working groups (each composed of workshop attendees). Each group was tasked to determine what it felt to be the best practices in the field today and what issues needed to be addressed to move the field forward in the 21st century

  15. Influence of liquid holdup in steam generator U-tubes on small break LOCA severity

    International Nuclear Information System (INIS)

    Leonard, M.T.; Perryman, J.L.; Johnson, G.W.

    1983-01-01

    The severity of small cold leg break loss-of-coolant accidents has been shown to be influenced by liquid holdup in steam generator U-tubes during pump suction loop seal formation in two experiments performed in the Semiscale Mod-2A facility. The core coolant level can be depressed lower than previously thought possible due to a positive hydrostatic head across the steam generators caused by delayed drainage of liquid from the upflow side of the U-tubes. The significance of a lower core coolant level depression is the potential for a more severe temperature excursion occurring during the coolant boiloff phase subsequent to loop seal clearing and prior to accumulator injection. Presented in this paper are the experimental data analysis and supporting computer code calculations that led to these conclusions

  16. Influence of inlet velocity of air and solid particle feed rate on holdup mass and heat transfer characteristics in cyclone heat exchanger

    International Nuclear Information System (INIS)

    Mothilal, T.; Pitchandi, K.

    2015-01-01

    The present work elaborates on the effect of inlet air velocity and solid particle feed rate on the holdup mass and heat transfer characteristics in a cyclone heat exchanger. The RNG k-ε turbulence model was adopted for modeling the highly turbulent flow, and the discrete phase model (DPM) was used to track solid particles in the cyclone heat exchanger with the ANSYS FLUENT software. The effects of inlet air velocity (5 to 25 m/s) and inlet solid particle feed rate (0.2 to 2.5 g/s) at different particle diameters (300 to 500 μm) on holdup mass and heat transfer rate were studied at an air inlet temperature of 473 K. Results show that holdup mass and heat transfer rate increase with increasing inlet air velocity and inlet solid particle feed rate, with the influence of solid particle feed rate on holdup mass being more significant. An experimental setup was built for a high-efficiency cyclone, and good agreement was found between experimental and simulated pressure drops. An empirical correlation was derived for the dimensionless holdup mass and Nusselt number based on the CFD data by a regression technique. The correlation predicts the dimensionless holdup mass within +5% to -8% of the experimental data and the Nusselt number within +9% to -3%.
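Correlations of the power-law form, such as a Nusselt number scaling Nu = a·Re^b, are typically fitted by linear least squares in log space; a generic sketch of that regression step (the coefficients below are illustrative, not the paper's correlation):

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on
    log y = log a + b * log x (all x and y must be positive)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# data generated from Nu = 0.8 * Re**0.6 is recovered by the fit
re_vals = [1.0e3, 5.0e3, 1.0e4, 5.0e4]
a, b = fit_power_law(re_vals, [0.8 * re ** 0.6 for re in re_vals])
```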

  17. Influence of inlet velocity of air and solid particle feed rate on holdup mass and heat transfer characteristics in cyclone heat exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Mothilal, T. [T. J. S. Engineering College, Gummidipoond (India); Pitchandi, K. [Sri Venkateswara College of Engineering, Sriperumbudur (India)

    2015-10-15

    The present work elaborates on the effect of inlet air velocity and solid particle feed rate on the holdup mass and heat transfer characteristics in a cyclone heat exchanger. The RNG k-ε turbulence model was adopted for modeling the highly turbulent flow, and the discrete phase model (DPM) was used to track solid particles in the cyclone heat exchanger with the ANSYS FLUENT software. The effects of inlet air velocity (5 to 25 m/s) and inlet solid particle feed rate (0.2 to 2.5 g/s) at different particle diameters (300 to 500 μm) on holdup mass and heat transfer rate were studied at an air inlet temperature of 473 K. Results show that holdup mass and heat transfer rate increase with increasing inlet air velocity and inlet solid particle feed rate, with the influence of solid particle feed rate on holdup mass being more significant. An experimental setup was built for a high-efficiency cyclone, and good agreement was found between experimental and simulated pressure drops. An empirical correlation was derived for the dimensionless holdup mass and Nusselt number based on the CFD data by a regression technique. The correlation predicts the dimensionless holdup mass within +5% to -8% of the experimental data and the Nusselt number within +9% to -3%.

  18. Standard test method for nondestructive assay of special nuclear material holdup using Gamma-Ray spectroscopic methods

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This test method describes gamma-ray methods used to nondestructively measure the quantity of ²³⁵U or ²³⁹Pu remaining as holdup in nuclear facilities. Holdup occurs in all facilities where nuclear material is processed: in process equipment, in exhaust ventilation systems, and in building walls and floors. 1.2 This test method includes information useful for management, planning, selection of equipment, consideration of interferences, measurement program definition, and the utilization of resources (1, 2, 3, 4). 1.3 The measurement of nuclear material holdup in process equipment requires a scientific knowledge of radiation sources and detectors, transmission of radiation, calibration, facility operations, and error analysis. It is subject to the constraints of the facility, management, budget, and schedule; health and safety requirements; and the laws of physics. The measurement process includes defining measurement uncertainties and is sensitive to the form and distribution of the material...

  19. EFFECTS OF ALTERNATE ANTIFOAM AGENTS, NOBLE METALS, MIXING SYSTEMS AND MASS TRANSFER ON GAS HOLDUP AND RELEASE FROM NONNEWTONIAN SLURRIES

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, H.; Fowley, M.; Crawford, C.; Restivo, M.; Leishear, R.

    2007-12-24

    Gas holdup tests performed in a small-scale mechanically agitated mixing system at the Savannah River National Laboratory (SRNL) were reported in 2006. The tests were for a simulant of waste from Hanford Tank 241-AZ-101 and featured additions of Dow Corning Q2-3183A antifoam agent. Results indicated that this antifoam agent (AFA) increased gas holdup in the waste simulant by about a factor of four and, counterintuitively, that the holdup increased as the simulant shear strength decreased (apparent viscosity decreased). These results raised questions about how the AFA might affect gas holdup in Hanford Waste Treatment and Immobilization Plant (WTP) vessels mixed by air sparging and pulse-jet mixers (PJMs), and whether the WTP air supply system being designed would have the capacity to handle a demand for increased airflow to operate the sparger-PJM mixing systems, should the AFA increase retention of the radiochemically generated flammable gases in the waste by making the gas bubbles smaller and less mobile, or decrease the size of sparger bubbles, making them mix less effectively for a given airflow rate. A new testing program was developed to assess the potential effects of adding the Dow Corning Q2-3183A AFA to WTP waste streams by first confirming the results of the work reported in 2006 by Stewart et al. and then determining whether the AFA in fact causes such increased gas holdup in a prototypic sparger-PJM mixing system, or whether the increased holdup is just a feature of the small-scale agitation system. Other elements of the new program include evaluating the effects other variables could have on gas holdup in systems with AFA additions, such as catalysis from trace noble metals in the waste, determining mass transfer coefficients for the AZ-101 waste simulant, and determining whether other AFA compositions such as Dow Corning 1520-US could also increase gas holdup in Hanford waste. This new testing program was split into two investigations, prototypic sparger

  20. Conceptual design of passive containment cooling system with air holdup tanks of improved APR+

    International Nuclear Information System (INIS)

    Jeon, Byong Guk; No, Hee Cheon

    2014-01-01

    In Korea, after the successful validation of the passive auxiliary feedwater system (PAFS), a passive containment cooling system (PCCS) is attracting attention for future development. We suggested a PCCS design based on the APR+, an advanced PWR developed in Korea, and performed a scoping analysis. As an extension of the simple scoping analysis, a MARS simulation was performed to incorporate the behavior of the water pool outside the containment as well as the steam-air mixture inside the containment. Through the simulation we demonstrated the effectiveness of the air holdup tank (AHT). We also investigated the effect of the models for heat transfer coefficients between the steam-air mixture side and the water side, and of flow instability inside the HX tubes. The presence of the AHT enables the number of required HX tubes to be reduced by more than half through an increase in the heat transfer coefficients due to the reduction of the air fraction in the containment. Finally, flow instability was observed and was mitigated by placing orifice plates at the inlet of the tubes, increasing the height of the return nozzle, and increasing the tube angle. (authors)

  1. Stationary and portable instruments for assay of HEU [highly enriched uranium] solids holdup

    International Nuclear Information System (INIS)

    Russo, P.A.; Sprinkle, J.K. Jr.; Stephens, M.M.; Brumfield, T.L.; Gunn, C.S.; Watson, D.R.

    1987-01-01

    Two NaI(Tl)-based instruments, one stationary and one portable, designed for automated assay of highly enriched uranium (HEU) solids holdup, are being evaluated at the scrap recovery facility of the Oak Ridge Y-12 Plant. The stationary instrument, a continuous monitor of HEU within the filters of the chip burner exhaust system, measures the HEU deposits that accumulate erratically and rapidly during chip burner operation. The portable system was built to assay HEU in over 100 m of elevated piping used to transfer UO₃, UO₂, and UF₄ powder to, from, and between the fluid bed conversion furnaces and the powder storage hoods. Both instruments use two detector heads. Both provide immediate automatic readout of accumulated HEU mass. The 186-keV ²³⁵U gamma ray is the assay signature, and the 60-keV gamma ray from an ²⁴¹Am source attached to each detector is used to normalize the 186-keV rate. The measurement geometries were selected for compatibility with simple calibration models. The assay calibrations were calculated from these models and were verified and normalized with measurements of HEU standards built to match geometries of uniform accumulations on the surfaces of the process equipment. This instrumentation effort demonstrates that simple calibration models can often be applied to unique measurement geometries, minimizing the otherwise unreasonable requirements for calibration standards and allowing extension of the measurements to other process locations

  2. Determination and evaluation of gas holdup time with the quadratic equation model and comparison with nonlinear equation models for isothermal gas chromatography

    Science.gov (United States)

    Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.

    2013-01-01

    Gas holdup time (tM) is a basic parameter in isothermal gas chromatography (GC). The determination and evaluation of tM and the retention behaviors of n-alkanes under isothermal GC conditions have been extensively studied since the 1950s, but the problem still remains unresolved. The difference equation (DE) model [J. Chromatogr. A 1260:215-223] reveals the retention behaviors of n-alkanes excluding tM, while the quadratic equation (QE) model [J. Chromatogr. A 1260:224-231], which includes tM, is suitable for applications. In the present study, tM values were calculated with the QE model (referred to as tMT), evaluated, and compared with three other typical nonlinear models. The QE model gives an accurate estimation of tM in isothermal GC. The tMT values are highly accurate, stable, and easy to calculate and use. There is only one tMT value at each GC condition. The proper classification of tM values can clarify their disagreement and facilitate GC retention data standardization, for which tMT values are promising reference tM values. PMID:23726077
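For contrast with the QE model discussed above, a classical three-point estimate of tM (the Peterson-Hirsch method) assumes the adjusted retention times of three consecutive n-alkanes form a geometric progression; a minimal sketch:

```python
def holdup_time_three_point(t1: float, t2: float, t3: float) -> float:
    """Estimate the gas holdup time tM from the retention times of three
    consecutive n-alkanes, assuming the adjusted retention times (tR - tM)
    form a geometric progression (classical three-point method, shown for
    contrast; this is not the QE model itself)."""
    return (t1 * t3 - t2 ** 2) / (t1 + t3 - 2.0 * t2)

# adjusted times of 10, 20, 40 s on top of tM = 60 s are recovered exactly
t_m = holdup_time_three_point(70.0, 80.0, 100.0)
```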

  3. Impact of Different H/D Ratio on Axial Gas Holdup Measured by Four-Tips Optical Fiber Probe in Slurry Bubble Column

    Directory of Open Access Journals (Sweden)

    Yasser Imad Abdulaziz

    2016-02-01

    In a wide range of chemical, petrochemical, and energy processes, it is not possible to manage without slurry bubble column reactors. In this investigation, the time-averaged local gas holdup was recorded for three different height-to-diameter (H/D) ratios of 3, 4, and 5 in an 18-in.-diameter slurry bubble column. An air-water-glass beads system was used with superficial velocities up to 0.24 m/s. The gas holdup was measured using a 4-tip optical fiber probe technique. The results show that the axial gas holdup increases almost linearly with superficial gas velocity up to 0.08 m/s and levels off with a further increase in velocity. A comparison of the present data with those reported for other slurry bubble columns having diameters larger than 18 in. and H/D higher than 5 indicated that there is little effect of diameter on gas holdup. Also, local section-average gas holdups increase with increasing superficial gas velocity, while the effect of solid loading is less significant than that of superficial gas velocity.

  4. Axial dispersion, holdup and slip velocity of dispersed phase in a pulsed sieve plate extraction column by radiotracer residence time distribution analysis.

    Science.gov (United States)

    Din, Ghiyas Ud; Chughtai, Imran Rafiq; Inayat, Mansoor Hameed; Khan, Iqbal Hussain

    2008-12-01

    The axial dispersion, holdup and slip velocity of the dispersed phase have been investigated for a range of dispersed and continuous phase superficial velocities in a pulsed sieve plate extraction column using radiotracer residence time distribution (RTD) analysis. An axial dispersion model (ADM) was used to simulate the hydrodynamics of the system. It was observed that an increase in the dispersed phase superficial velocity results in a decrease in its axial dispersion and an increase in its slip velocity, while its holdup increases until a maximum asymptotic value is reached. An increase in the superficial velocity of the continuous phase increases the axial dispersion and holdup of the dispersed phase until a maximum value is obtained, while the slip velocity of the dispersed phase initially decreases and then increases with increasing superficial velocity of the continuous phase.
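The slip velocity referred to above is commonly defined for countercurrent columns as the sum of the interstitial (true) phase velocities; a minimal sketch under that definition (the velocity values are illustrative):

```python
def slip_velocity(u_dispersed: float, u_continuous: float,
                  dispersed_holdup: float) -> float:
    """Slip velocity in a countercurrent column: the sum of the
    interstitial phase velocities, i.e. each superficial velocity divided
    by the fraction of the cross-section its phase occupies."""
    return (u_dispersed / dispersed_holdup
            + u_continuous / (1.0 - dispersed_holdup))

# e.g. superficial velocities of 5 and 10 mm/s at 10 % dispersed-phase holdup
u_slip = slip_velocity(0.005, 0.010, 0.10)
```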

  5. Control of nuclear material hold-up: The key factors for design and operation of MOX fuel fabrication plants in Europe

    International Nuclear Information System (INIS)

    Beaman, M.; Beckers, J.; Boella, M.

    2001-01-01

    Full text: Some protagonists of the nuclear industry suggest that MOX fuel fabrication plants are awash with nuclear materials which cannot be adequately safeguarded and that materials 'stuck in the plant' could conceal clandestine diversion of plutonium. In Europe the real situation is quite different: nuclear operators have gone to considerable effort to deploy effective systems for safety, security, quality, and nuclear materials control and accountancy which provide detailed information. The safeguards authorities use this information as part of the safeguards measures enabling them to give safeguards assurances for MOX fuel fabrication plants. This paper focuses on the issue of hold-up: the definition of hold-up and of the so-called 'hidden inventory'; measures implemented by the plant operators, from design to day-to-day operations, for minimising hold-up and 'hidden inventory'; plant operators' actions to manage the hold-up during production activities but also at PIT/PIV time; monitoring and management of the 'hidden inventory'; and measures implemented by the safeguards authorities and inspectorate for verification and control of both hold-up and 'hidden inventory'. The examples of the different plant-specific experiences related in this paper reveal the extensive experience gained in European MOX fuel fabrication plants by the plant operators and the safeguards authorities in minimising and controlling both hold-up and 'hidden inventory'. MOX fuel has been fabricated in Europe, with an actual combined capacity of 250 t.HM/year subject, without any discrimination, to EURATOM Safeguards, for more than 30 years, and the total output to date is some 1000 t.HM. (author)

  6. Design of Helical Capacitance Sensor for Holdup Measurement in Two-Phase Stratified Flow: A Sinusoidal Function Approach

    Science.gov (United States)

    Lim, Lam Ghai; Pao, William K. S.; Hamid, Nor Hisham; Tang, Tong Boon

    2016-01-01

A 360° twisted helical capacitance sensor was developed for holdup measurement in horizontal two-phase stratified flow. Instead of suppressing the nonlinear response, the sensor was optimized in such a way that a ‘sine-like’ function was displayed on top of the linear function. This design concept was implemented and verified in both software and hardware. A good agreement was achieved between the finite element model of the proposed design and the approximation model (pure sinusoidal function), with a maximum difference of ±1.2%. In addition, the design parameters of the sensor were analysed and investigated. It was found that the error in symmetry of the sinusoidal function could be minimized by adjusting the pitch of the helix. Experiments with air-water and oil-water stratified flows were carried out, validating the sinusoidal relationship with maximum differences of ±1.2% and ±1.3%, respectively, for water holdup in the range 0.15 to 0.85. The proposed design concept therefore may pose a promising alternative for the optimization of capacitance sensor design. PMID:27384567
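The 'linear plus sine' response described above can be illustrated with a small numerical sketch. The ripple amplitude, the normalization, and the bisection inversion below are illustrative assumptions for this listing, not the authors' calibration:

```python
import math

# Hypothetical normalized sensor response: a linear term in water holdup h
# plus a small sinusoidal ripple, mimicking the 'sine-like' behaviour the
# abstract describes. The amplitude A is an assumed illustrative value,
# chosen to match the reported ~1.2% maximum difference.
A = 0.012

def response(h: float) -> float:
    """Normalized capacitance response for water holdup h in [0, 1]."""
    return h + A * math.sin(2 * math.pi * h)

def holdup_from_response(c: float, tol: float = 1e-9) -> float:
    """Invert the response by bisection (monotonic because 2*pi*A < 1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response(mid) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = response(0.4)          # simulated measurement at holdup 0.4
h = holdup_from_response(c)  # recovered holdup
```

Because the ripple amplitude is small, the response stays monotonic and the inversion is unambiguous over the whole holdup range.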

  7. Design of Helical Capacitance Sensor for Holdup Measurement in Two-Phase Stratified Flow: A Sinusoidal Function Approach

    Directory of Open Access Journals (Sweden)

    Lam Ghai Lim

    2016-07-01

Full Text Available A 360° twisted helical capacitance sensor was developed for holdup measurement in horizontal two-phase stratified flow. Instead of suppressing the nonlinear response, the sensor was optimized in such a way that a ‘sine-like’ function was displayed on top of the linear function. This design concept was implemented and verified in both software and hardware. A good agreement was achieved between the finite element model of the proposed design and the approximation model (pure sinusoidal function), with a maximum difference of ±1.2%. In addition, the design parameters of the sensor were analysed and investigated. It was found that the error in symmetry of the sinusoidal function could be minimized by adjusting the pitch of the helix. Experiments with air-water and oil-water stratified flows were carried out, validating the sinusoidal relationship with maximum differences of ±1.2% and ±1.3%, respectively, for water holdup in the range 0.15 to 0.85. The proposed design concept therefore may pose a promising alternative for the optimization of capacitance sensor design.

  8. Design of Helical Capacitance Sensor for Holdup Measurement in Two-Phase Stratified Flow: A Sinusoidal Function Approach.

    Science.gov (United States)

    Lim, Lam Ghai; Pao, William K S; Hamid, Nor Hisham; Tang, Tong Boon

    2016-07-04

A 360° twisted helical capacitance sensor was developed for holdup measurement in horizontal two-phase stratified flow. Instead of suppressing the nonlinear response, the sensor was optimized in such a way that a 'sine-like' function was displayed on top of the linear function. This design concept was implemented and verified in both software and hardware. A good agreement was achieved between the finite element model of the proposed design and the approximation model (pure sinusoidal function), with a maximum difference of ±1.2%. In addition, the design parameters of the sensor were analysed and investigated. It was found that the error in symmetry of the sinusoidal function could be minimized by adjusting the pitch of the helix. Experiments with air-water and oil-water stratified flows were carried out, validating the sinusoidal relationship with maximum differences of ±1.2% and ±1.3%, respectively, for water holdup in the range 0.15 to 0.85. The proposed design concept therefore may pose a promising alternative for the optimization of capacitance sensor design.

  9. HEU Measurements of Holdup and Recovered Residue in the Deactivation and Decommissioning Activities of the 321-M Reactor Fuel Fabrication Facility at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    DEWBERRY, RAYMOND; SALAYMEH, SALEEM R.; CASELLA, VITO R.; MOORE, FRANK S.

    2005-03-11

This paper contains a summary of the holdup and material control and accountability (MC&A) assays conducted for the determination of highly enriched uranium (HEU) in the deactivation and decommissioning (D&D) of Building 321-M at the Savannah River Site (SRS). The 321-M facility was the Reactor Fuel Fabrication Facility at SRS and was used to fabricate HEU fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the SRS production reactors. The facility operated for more than 35 years, during which time thousands of uranium-aluminum-alloy (U-Al) production reactor fuel tubes were produced. After the facility ceased operations in 1995, all of the easily accessible U-Al was removed from the building, and only residual amounts remained. The bulk of this residue was located in the equipment that generated and handled small U-Al particles and in the exhaust systems for this equipment (e.g., chip compactor, casting furnaces, log saw, lathes A & B, cyclone separator, Freon™ cart, riser crusher, etc.). The D&D project is likely to represent an important example for D&D activities across SRS and across the Department of Energy weapons complex. The Savannah River National Laboratory was tasked to conduct holdup assays to quantify the amount of HEU on all components removed from the facility prior to their placement in solid waste containers. The U-235 holdup in any single component of process equipment must not exceed 50 g in order to meet the container limit, which was imposed to meet criticality requirements of the low-level solid waste storage vaults. Thus the holdup measurements were used as guidance to determine whether further decontamination of equipment was needed to ensure that the quantity of U-235 did not exceed the 50 g limit and that the waste met the Waste Acceptance Criteria (WAC) of the solid waste storage vaults. Since HEU is an accountable nuclear material, the holdup assays and assays of recovered

  10. Influence of the type of organisms on the biomass hold-up in a fluidized-bed reactor

    Energy Technology Data Exchange (ETDEWEB)

    Timmermans, P.; Haute, A. van

    1984-01-01

In the last few years, the use of fluidized-bed reactors for biological wastewater treatment has received increasing attention. In 1981, Shieh et al. proposed a model to predict the biomass concentration in a fluidized-bed reactor, from which one can see that the biofilm density plays a very important role in determining the total biomass hold-up. In this article the influence of the type of carbon source on the biomass concentration, and consequently of the type of organisms selected, is studied. The growth of a filamentous, bud-forming bacterium in a reactor treating nitrate-rich surface water supplied with methanol as the carbon source results in a biomass concentration only half of that normally obtained in a fluidized-bed reactor treating synthetic wastewater; in the latter case rod-shaped bacteria are enriched, which permit dense packing.

  11. The pricing of trees: A study of hold-ups, holdouts, buy-outs and sell-offs

    Directory of Open Access Journals (Sweden)

    WD Reekie

    2004-11-01

    Full Text Available This paper draws on transactions cost analysis, price and auction theory, and competition authority findings in order to answer some questions on the structure and trading patterns of the South African forestry industry. Does a forestry firm linked contractually to supply an adjacent sawmill customer, form part of a bilateral monopoly?  For competition policy what are the relevant markets each party sells into or buys from?  Can either firm opportunistically hold-up the other in price revisions?  Or, where contracts have no effective terminal date, can one party hold out against offers of contract buyout?  If one party is a state agency are there rights of eminent domain?  If the state agency is due to be privatised can the method of sale, for example a simultaneous ascending auction, resolve some of the dilemmas?

  12. In-Situ Measurements of Low Enrichment Uranium Holdup Process Gas Piping at K-25 - Paper for Waste Management Symposia 2010 East Tennessee Technology Park Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Rasmussen, B.

    2010-01-01

This document is the final version of a paper submitted to the Waste Management Symposia, Phoenix, 2010, abstract BJC/OR-3280. The primary document from which this paper was condensed is In-Situ Measurement of Low Enrichment Uranium Holdup in Process Gas Piping at K-25 Using NaI/HMS4 Gamma Detection Systems, BJC/OR-3355. This work explores the sufficiency and limitations of the Holdup Measurement System 4 (HMS4) software algorithms applied to measurements of low enriched uranium holdup in gaseous diffusion process gas piping. HMS4 has been used extensively for U-235 holdup quantification during the decommissioning and demolition project of the K-25 building. The HMS4 software is an integral part of one of the primary nondestructive assay (NDA) systems which was successfully tested and qualified for holdup deposit quantification in the process gas piping of the K-25 building. The initial qualification focused on the measurement of highly enriched UO2F2 deposits; the purpose of this work was to determine whether that qualification could be extended to the quantification of holdup in UO2F2 deposits of lower enrichment. Sample field data are presented as evidence in support of the theoretical foundation. The HMS4 algorithms were investigated in detail and found to sufficiently compensate for UO2F2 source self-attenuation effects over the range of expected enrichment (4-40%) in the North and East Wings of the K-25 building. The limitations of the HMS4 algorithms were explored for a described set of conditions with respect to area-source measurements of low enriched UO2F2 deposits made with a 1 inch by 1/2 inch sodium iodide (NaI) scintillation detector. The theoretical limitations of HMS4, based on the expected conditions in the process gas system of the K-25 building, are related back to the required data quality objectives (DQO) for the NDA measurement system established for the K-25 demolition project. The combined

  13. PREDICTION OF GAS HOLD-UP IN A COMBINED LOOP AIR LIFT FLUIDIZED BED REACTOR USING NEWTONIAN AND NON-NEWTONIAN LIQUIDS

    Directory of Open Access Journals (Sweden)

    Sivakumar Venkatachalam

    2011-09-01

Full Text Available Many experiments have been conducted to study the hydrodynamic characteristics of column reactors and loop reactors. In the present work, a novel combined loop airlift fluidized bed reactor was developed to study the effects of superficial gas and liquid velocities, particle diameter, and fluid properties on gas holdup, using Newtonian and non-Newtonian liquids. Compressed air was used as the gas phase. Water, 5% n-butanol, and glycerol solutions (60 and 80%) were used as Newtonian liquids, and carboxymethyl cellulose aqueous solutions (0.25, 0.6 and 1.0%) as non-Newtonian liquids. Spheres of different sizes, Berl saddles and Raschig rings were used as solid phases. From the experimental results, it was found that gas holdup increases with superficial gas velocity but decreases with increasing superficial liquid velocity and liquid viscosity. Based on the experimental results, a correlation was developed to predict the gas hold-up for Newtonian and non-Newtonian liquids over a wide range of operating conditions in the homogeneous flow regime, where the superficial gas velocity is less than approximately 5 cm/s.

  14. Enhancement of oxygen mass transfer and gas holdup using palm oil in stirred tank bioreactors with xanthan solutions as simulated viscous fermentation broths.

    Science.gov (United States)

    Mohd Sauid, Suhaila; Krishnan, Jagannathan; Huey Ling, Tan; Veluri, Murthy V P S

    2013-01-01

Volumetric mass transfer coefficient (kLa) is an important parameter in bioreactors handling viscous fermentations such as xanthan gum production, as it affects reactor performance and productivity. Published literature shows that adding an organic phase such as hydrocarbons or vegetable oil can increase kLa. The present study opted for palm oil as the organic phase, as it is plentiful in Malaysia. Experiments were carried out to study the effects of palm oil fraction on the viscosity, gas holdup, and kLa of xanthan solutions by varying the agitation rate and aeration rate in a 5 L bench-top bioreactor fitted with twin Rushton turbines. Results showed that 10% (v/v) palm oil raised the kLa of the xanthan solution 1.5- to 3-fold, with the highest kLa value of 84.44 h(-1). It was also found that palm oil increased the gas holdup and viscosity of the xanthan solution. The kLa values obtained as a function of power input, superficial gas velocity, and palm oil fraction were validated against two different empirical equations. Similarly, the gas holdup obtained as a function of power input and superficial gas velocity was validated against another empirical equation. All correlations were found to fit well, with high determination coefficients.
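Empirical kLa correlations of the kind validated above are typically power laws in specific power input and superficial gas velocity. The sketch below uses assumed constants (not this paper's fitted values) to show how such an exponent can be recovered by least squares on log-transformed data:

```python
import math

# Hedged sketch: stirred-tank correlations often take the form
#   kLa = a * (P/V)**b * (ug)**c
# The constants below are illustrative assumptions, not fitted values.
a_true, b_true, c_true = 0.02, 0.5, 0.4

def kla(p_per_v: float, ug: float) -> float:
    """Synthetic 'measured' kLa from the assumed power-law correlation."""
    return a_true * p_per_v ** b_true * ug ** c_true

# Recover the power-input exponent b from runs at fixed superficial gas
# velocity, by ordinary least squares on the log-transformed data.
ug = 0.004
powers = (100, 200, 400, 800, 1600)
xs = [math.log(p) for p in powers]
ys = [math.log(kla(p, ug)) for p in powers]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b_fit = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
```

With noise-free synthetic data the regression returns the assumed exponent exactly; with real measurements the same fit yields the reported empirical constants and their determination coefficient.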

  15. Enhancement of Oxygen Mass Transfer and Gas Holdup Using Palm Oil in Stirred Tank Bioreactors with Xanthan Solutions as Simulated Viscous Fermentation Broths

    Directory of Open Access Journals (Sweden)

    Suhaila Mohd Sauid

    2013-01-01

Full Text Available Volumetric mass transfer coefficient (kLa) is an important parameter in bioreactors handling viscous fermentations such as xanthan gum production, as it affects reactor performance and productivity. Published literature shows that adding an organic phase such as hydrocarbons or vegetable oil can increase kLa. The present study opted for palm oil as the organic phase, as it is plentiful in Malaysia. Experiments were carried out to study the effects of palm oil fraction on the viscosity, gas holdup, and kLa of xanthan solutions by varying the agitation rate and aeration rate in a 5 L bench-top bioreactor fitted with twin Rushton turbines. Results showed that 10% (v/v) palm oil raised the kLa of the xanthan solution 1.5- to 3-fold, with the highest kLa value of 84.44 h−1. It was also found that palm oil increased the gas holdup and viscosity of the xanthan solution. The kLa values obtained as a function of power input, superficial gas velocity, and palm oil fraction were validated against two different empirical equations. Similarly, the gas holdup obtained as a function of power input and superficial gas velocity was validated against another empirical equation. All correlations were found to fit well, with high determination coefficients.

  16. Inventory estimation for nuclear fuel reprocessing systems

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.

    1987-01-01

The accuracy of nuclear material accounting methods for nuclear fuel reprocessing facilities is limited by nuclear material inventory variations in the solvent extraction contactors, which affect the separation and purification of uranium and plutonium. Since in-line methods for measuring contactor inventory are not available, simple inventory estimation models are being developed for mixer-settler contactors operating at steady state, with a view toward improving the accuracy of nuclear material accounting methods for reprocessing facilities. The authors investigated the following items: (1) improvements in the utility of the inventory estimation models; (2) extension of these improvements to inventory estimation under transient, non-steady-state conditions during, for example, process upsets or throughput variations; and (3) development of simple inventory estimation models for reprocessing systems using pulsed columns
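As a rough illustration of what a "simple inventory estimation model" can look like, the sketch below assumes steady state, known per-stage phase holdup volumes, and an interpolated concentration profile between measured end-stream concentrations. All numbers are invented for illustration; real estimators use stage-wise equilibrium and flow data:

```python
# Hedged sketch of a steady-state inventory estimate for a mixer-settler
# bank: per-stage concentrations are approximated by linear interpolation
# between measured inlet/outlet stream concentrations, then multiplied by
# the (assumed known) phase volumes held in each stage.
N = 8                     # number of stages (assumed)
v_aq, v_org = 2.0, 1.5    # liters of each phase held per stage (assumed)
x_in, x_out = 50.0, 0.5   # aqueous U concentration, g/L: feed and raffinate
y_in, y_out = 0.0, 33.0   # organic U concentration, g/L: solvent in and out

def profile(c0: float, c1: float, n: int) -> list[float]:
    """Linear interpolation of a per-stage concentration profile."""
    return [c0 + (c1 - c0) * i / (n - 1) for i in range(n)]

aq = profile(x_in, x_out, N)
org = profile(y_in, y_out, N)

# Total contactor inventory in grams: sum over stages of volume x concentration.
inventory_g = sum(v_aq * x + v_org * y for x, y in zip(aq, org))
```

Linear interpolation is the crudest possible profile; the models discussed in the abstract refine this with equilibrium and mass-balance information per stage.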

  17. Gas hold-up and oxygen mass transfer in three pneumatic bioreactors operating with sugarcane bagasse suspensions.

    Science.gov (United States)

    Esperança, M N; Cunha, F M; Cerri, M O; Zangirolami, T C; Farinas, C S; Badino, A C

    2014-05-01

Sugarcane bagasse is a low-cost and abundant by-product of the bioethanol industry and a potential substrate for cellulolytic enzyme production. The aim of this work was to evaluate the effects of air flow rate (QAIR), solids loading (%S), sugarcane bagasse type, and particle size on the gas hold-up (εG) and volumetric oxygen transfer coefficient (kLa) in three different pneumatic bioreactors, using response surface methodology. Concentric-tube airlift (CTA), split-cylinder airlift (SCA), and bubble column (BC) bioreactors were tested. QAIR and %S affected oxygen mass transfer positively and negatively, respectively, while sugarcane bagasse type and particle size (within the range studied) did not influence kLa. Using large particles of untreated sugarcane bagasse, the loop-type bioreactors (CTA and SCA) exhibited higher mass transfer than the BC reactor. At higher %S, the SCA presented a higher kLa value (0.0448 s−1) than the CTA, and the best operational conditions in terms of oxygen mass transfer were achieved for %S 27.0 L min−1. These results demonstrate that pneumatic bioreactors can provide elevated oxygen transfer in the presence of vegetal biomass, making them an excellent option for three-phase systems for cellulolytic enzyme production by filamentous fungi.

  18. Determination of diffusion factors according to the distribution of coating components at isothermal hold-up

    International Nuclear Information System (INIS)

    Shatinskij, V.F.; Nesterenko, A.I.

    1980-01-01

Calculation equations for estimating coating-metal diffusion coefficients are derived, and the derived dependences are checked experimentally. Studies were made on flat armco iron samples of 2×10×15 mm with W, Mo, Cr, Ga and Ge coatings. The initial distribution of the saturation elements, determined experimentally and approximated by functions, is presented. By substituting the initial distributions into the calculated dependences, the diffusion coefficients of the above elements in armco iron are determined. For experimental verification, a computer program was written for a two-phase chromium coating on armco iron. The Cr diffusion coefficients in the α- and γ-phases, determined at 950 °C, are 5.692×10⁻¹⁰ cm²/s and 1.365×10⁻¹⁰ cm²/s, respectively. Control tests have shown that applying the calculated diffusion coefficients permits the redistribution of the saturation element in the matrix of the saturating metal to be described with high accuracy
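A common way to estimate a diffusion coefficient from a concentration-depth profile after an isothermal hold is to assume a semi-infinite solid with constant surface concentration, so that c(x,t) = c_s·erfc(x/(2√(Dt))). The sketch below is illustrative (the hold time is assumed; the α-phase Cr value quoted above is used only to generate synthetic data) and recovers D from the half-concentration depth:

```python
import math

# Hedged sketch: erfc-profile estimate of a diffusion coefficient from a
# concentration-depth profile. Assumes a semi-infinite solid and constant
# surface concentration; hold time and units are illustrative.
D_true = 5.692e-10   # cm^2/s, the alpha-phase Cr value quoted above
t = 10 * 3600.0      # assumed 10 h isothermal hold, in seconds
c_s = 1.0            # normalized surface concentration

def conc(x: float) -> float:
    """Synthetic 'measured' concentration at depth x (cm)."""
    return c_s * math.erfc(x / (2.0 * math.sqrt(D_true * t)))

def find_root(f, lo: float, hi: float, tol: float = 1e-12) -> float:
    """Bisection for a decreasing function f with f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Depth at which concentration falls to half the surface value, and the
# argument z solving erfc(z) = 1/2; together they give D = (x/2z)^2 / t.
x_half = find_root(lambda x: conc(x) - 0.5 * c_s, 0.0, 1.0)
z_half = find_root(lambda z: math.erfc(z) - 0.5, 0.0, 2.0)
D_est = (x_half / (2.0 * z_half)) ** 2 / t
```

In practice one fits the whole measured profile rather than a single point, which is essentially what substituting the approximated initial distributions into the calculated dependences accomplishes.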

  19. Study of axial mixing, holdup and slip velocity of dispersed phase in a pulsed sieve plate extraction column using radiotracer technique.

    Science.gov (United States)

    Ghiyas Ud Din; Imran Rafiq Chughtai; Hameed Inayat, Mansoor; Hussain Khan, Iqbal

    2009-01-01

Axial mixing, holdup and slip velocity of the dispersed phase, parameters of fundamental importance in the design and operation of liquid-liquid extraction pulsed sieve plate columns, have been investigated. Experiments for residence time distribution (RTD) analysis were carried out over a range of pulsation frequencies and amplitudes in a liquid-liquid extraction pulsed sieve plate column, with water as the dispersed and kerosene as the continuous phase, using the radiotracer technique. The column was operated in the emulsion region, and 99mTc in the form of sodium pertechnetate eluted from a 99Mo/99mTc generator was used to trace the dispersed phase. An axial dispersed plug flow model with open-open boundary conditions and the two-point measurement method was used to simulate the hydrodynamics of the dispersed phase. It was observed that the axial mixing and holdup of the dispersed phase increase with pulsation frequency and amplitude until a maximum value is reached, while the slip velocity decreases with increasing pulsation frequency and amplitude until it approaches a minimum value. The short-lived, low-energy radiotracer 99mTc in the form of sodium pertechnetate was found to be a good water tracer for studying the hydrodynamics of a liquid-liquid extraction pulsed sieve plate column operating with two immiscible liquids, water and kerosene. The axial dispersed plug flow model with open-open boundary conditions was found to be a suitable model to describe the hydrodynamics of the dispersed phase in the pulsed sieve plate extraction column.
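For the axial dispersed plug flow model with open-open boundary conditions, the dimensionless RTD variance is related to the Peclet number by σθ² = 2/Pe + 8/Pe², so Pe (and from it an axial dispersion coefficient) can be extracted from measured moments. A minimal sketch, with invented column data:

```python
import math

# Hedged sketch: invert the open-open axial-dispersion relation
#   sigma_theta^2 = 2/Pe + 8/Pe^2
# for the Peclet number. Writing p = 1/Pe gives the quadratic
#   8 p^2 + 2 p - sigma2 = 0, whose positive root is taken.
def peclet_from_variance(sigma2: float) -> float:
    p = (-2.0 + math.sqrt(4.0 + 32.0 * sigma2)) / 16.0
    return 1.0 / p

def variance(pe: float) -> float:
    """Forward relation, used here only to check the inversion."""
    return 2.0 / pe + 8.0 / pe ** 2

pe = peclet_from_variance(0.25)   # illustrative measured variance

# Axial dispersion coefficient from Pe = u*L/D_ax, with assumed column data.
u, L = 0.01, 2.0                  # m/s phase velocity, m column length
d_ax = u * L / pe                 # m^2/s
```

The two-point method mentioned above subtracts the moments measured at two axial positions, which removes the injection and detection contributions before applying this relation.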

  20. Study of axial mixing, holdup and slip velocity of dispersed phase in a pulsed sieve plate extraction column using radiotracer technique

    International Nuclear Information System (INIS)

    Ghiyas Ud Din; Imran Rafiq Chughtai; Mansoor Hameed Inayat; Iqbal Hussain Khan

    2009-01-01

Axial mixing, holdup and slip velocity of the dispersed phase, parameters of fundamental importance in the design and operation of liquid-liquid extraction pulsed sieve plate columns, have been investigated. Experiments for residence time distribution (RTD) analysis were carried out over a range of pulsation frequencies and amplitudes in a liquid-liquid extraction pulsed sieve plate column, with water as the dispersed and kerosene as the continuous phase, using the radiotracer technique. The column was operated in the emulsion region, and 99mTc in the form of sodium pertechnetate eluted from a 99Mo/99mTc generator was used to trace the dispersed phase. An axial dispersed plug flow model with open-open boundary conditions and the two-point measurement method was used to simulate the hydrodynamics of the dispersed phase. It was observed that the axial mixing and holdup of the dispersed phase increase with pulsation frequency and amplitude until a maximum value is reached, while the slip velocity decreases with increasing pulsation frequency and amplitude until it approaches a minimum value. The short-lived, low-energy radiotracer 99mTc in the form of sodium pertechnetate was found to be a good water tracer for studying the hydrodynamics of a liquid-liquid extraction pulsed sieve plate column operating with two immiscible liquids, water and kerosene. The axial dispersed plug flow model with open-open boundary conditions was found to be a suitable model to describe the hydrodynamics of the dispersed phase in the pulsed sieve plate extraction column.

  1. Gross Maldistribution Identification and Effect of Inlet Distributor on the Phase Holdup in a Trickle Bed Reactor Using Gamma-Ray Densitometry (GRD)

    International Nuclear Information System (INIS)

    Mohd Fitri Abdul Rahman; Alexander, V.; Al-Dahhan, M.

    2016-01-01

Local liquid and gas maldistribution and the corresponding phase holdups in a packed column are difficult to identify due to the multiphase flow properties and other design factors. Good liquid and gas flow distribution is important for achieving high performance in a trickle bed reactor (TBR); gross maldistribution indicates faulty or poor distribution of the liquid and gas flows. In this work, gross maldistribution of the phases was identified using the gamma-ray densitometry (GRD) technique with three types of inlet distributor (single inlet toward the wall, single inlet at the center, and a proper shower) by measuring line-averaged diameter profiles of the phase (liquid, gas, and solid) holdups. Gamma-ray densitometry is a non-invasive technique which can be implemented on laboratory-, pilot-plant-, and industrial-scale reactors. Experiments were performed on a 0.14 m diameter Plexiglas reactor packed with 0.003 m glass beads as the solid phase. The superficial velocities of the gas and liquid were in the ranges 0.03 m/s to 0.27 m/s and 0.004 m/s to 0.014 m/s, respectively. The proper shower distributor showed earlier liquid spreading than the other distributors. The effect of superficial gas velocity on liquid spread was found to be insignificant, and the liquid distribution was almost uniform in the central region of the catalyst bed. (author)

  2. Technology of extraction by solvent in pulsed columns

    International Nuclear Information System (INIS)

    Ros, P.

    1992-01-01

Since its creation, the CEA (Commissariat a l'energie atomique) has developed several separation processes for the treatment of natural or enriched uranium and of spent fuels from nuclear reactors. Among these technologies, solvent extraction is broadly used for the separation and purification of nuclear materials. This technology can also be applied in other fields such as hydrometallurgy, chemistry, pharmaceuticals, depollution and the agro-industry

  3. Inter-organizational relationship and hold-up in the automotive sector: an analysis from the perspective of Transaction Cost Economics

    Directory of Open Access Journals (Sweden)

    Elio Ferrato

    2006-03-01

Full Text Available Institutional changes since the 1990s have forced parts suppliers to the Brazilian automotive industry to adjust their processes to the demands of a globalized market, resulting in a complex inter-organizational relationship between suppliers and automotive assemblers. This study analyzes that relationship from the perspective of Transaction Cost Economics (TCE). Based on the relationship of an automotive assembler with twelve of its suppliers, propositions were formulated about the perceived risk of contract break-up (hold-up). The results show that the duration of the supplier/assembler relationship, the level of detail in contract preparation, and less monitoring of the supplier by the assembler all imply a lower perceived risk of hold-up. The dominant governance structure in this relationship was shown to be the hybrid model, which permits a reduction of transaction costs. Further studies examining the inter-organizational relationship from the TCE perspective with a broader and more heterogeneous sample could contribute to the development and application of this theory.

  4. A review of the application of temporary and permanent coatings for the reduction of activity hold-up and increased ease of decontamination

    International Nuclear Information System (INIS)

    Turner, A.D.; Dalton, J.J.

    1984-02-01

Several surface finishing and coating techniques used in nuclear and industrial applications have been identified, and their potential for reducing the hold-up of activity on exposed surfaces in α-active facilities and easing their subsequent decontamination has been evaluated. The permanent treatment processes considered include electro-polishing, plating and anodizing; shot peening and nitriding; glazing, enamelling and ceramic coating; and paints, lacquers and plastic linings. As temporary, replaceable surface protection, strippable coatings, adhesive-backed films and chemically removable paints have also been included. An experimental programme is being initiated as a result of this survey to examine the effectiveness of these surface treatments in helping to reduce PuO2 contamination and maximise decontamination effectiveness. (author)

  5. Difference equation model for isothermal gas chromatography expresses retention behavior of homologues of n-alkanes excluding the influence of holdup time

    Science.gov (United States)

    Wu, Liejun; Chen, Yongli; Caccamise, Sarah A.L.; Li, Qing X.

    2012-01-01

A difference equation (DE) model is developed using the methylene retention increment (Δtz) of n-alkanes to avoid the influence of gas holdup time (tM). The effects of the equation order (1st–5th) on the accuracy of curve fitting show that a linear equation (LE) is less satisfactory and that it is not necessary to use a complicated cubic or higher-order equation. The relationship between the logarithm of Δtz and the carbon number (z) of the n-alkanes under isothermal conditions closely follows a quadratic equation for C3–C30 n-alkanes at column temperatures of 24–260 °C. The first- and second-order forward differences of the expression (Δlog Δtz and Δ²log Δtz) are linear and constant, respectively, which validates the DE model. This DE model lays the foundation for further developing a retention model that accurately describes the relationship between the adjusted retention time and z of n-alkanes. PMID:22939376
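The validation step described above (first forward differences linear, second differences constant) is easy to reproduce numerically; the quadratic coefficients below are illustrative assumptions, not the fitted GC values:

```python
# Hedged sketch: if log(delta_t_z) is quadratic in carbon number z,
#   log_dtz(z) = a + b*z + c*z^2,
# then its first forward differences are linear in z and its second
# forward differences are the constant 2c. Coefficients are illustrative.
a, b, c = -0.50, 0.30, -0.002
zs = list(range(3, 31))                    # C3 through C30
log_dtz = [a + b * z + c * z * z for z in zs]

# First and second forward differences of the sequence.
first = [y1 - y0 for y0, y1 in zip(log_dtz, log_dtz[1:])]
second = [d1 - d0 for d0, d1 in zip(first, first[1:])]
```

On measured retention data the same check is applied after subtracting consecutive n-alkane retention times, which is what removes the holdup-time contribution.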

  6. Operating range, hold-up, droplet size and axial mixing of pulsed plate columns in highly disperse and low-continuity volume flows

    International Nuclear Information System (INIS)

    Schmidt, H.; Miller, H.

Operating behavior, hold-up, droplet size and axial mixing were investigated in a pulsed plate column at highly disperse and low continuous volume flows. The geometry of the column, 4 m long and 10 cm in inside diameter, was held constant. The hole shape of the column plates was varied, comparing cylindrical, sharp-edged drilled holes with punched, nozzle-shaped holes in their effects on the fluid-dynamic behavior. The volume flows, the ratio of the volume flows, the pulse frequency and the operating temperature were varied, while the assignment of the aqueous and organic phases as continuous and disperse phases was held constant. The objective was to demonstrate the applicability of pulsed plate columns at very large differences between the organic disperse and the aqueous continuous volume flows, to obtain design data for such columns, and to perform a scale-up to industrial reprocessing-plant size. 18 references, 11 figures, 3 tables

  7. Effects of alcohols on gas holdup and volumetric liquid-phase mass transfer coefficient in gel-particle-suspended bubble column

    Energy Technology Data Exchange (ETDEWEB)

    Salvacion, J.; Murayama, M.; Otaguchi, K.; Koide, K. [Tokyo Institute of Technology, Tokyo (Japan)

    1995-08-20

The effects of alcohols, column dimensions, gas velocity, physical properties of the liquids, and gel particles on the gas holdup εG and the volumetric liquid-phase mass transfer coefficient kLa in a gel-particle-suspended bubble column under liquid-solid batch operation were studied experimentally. It was shown that the addition of alcohols to water generally increases εG. However, kLa values in aqueous alcohol solutions became larger or smaller than those in water, according to the kind and concentration of the alcohol added. It was also shown that the presence of suspended gel particles in the bubble column reduces the values of εG and kLa. Based on these observations, empirical equations were proposed for εG in the transition regime in an ethanol solution, for εG in the heterogeneous flow regime applicable to various alcohol solutions, and for kLa in both flow regimes. 18 refs., 12 figs., 3 tabs.

  8. Preliminary findings of the effect of surface finish and coatings on PuO2 contamination hold-up and ease of decontamination in aqueous and non-aqueous media

    International Nuclear Information System (INIS)

    Dalton, J.T.; Chamberlain, H.E.; Turner, A.D.; Dawson, R.K.

    1984-11-01

    The application of temporary and permanent coatings for the reduction of α-activity hold-up and increased ease of decontamination has been reviewed, and a variety of surface treatments and coatings identified as worthy of investigation. A range of specimens has been prepared with hard coatings and smooth surfaces. A number of adhesive films, paints and lacquers have been applied to mild and stainless steel substrates. In order to compare the different surfaces, a standard contamination technique using a mechanical wiper has been developed to reproducibly contaminate the materials with PuO2. A standard decontamination test using water/Decon 75 or Arklone X is being used to compare the ease of decontamination. Preliminary experiments have shown that the smoothest surface finishes have the lowest activity hold-up and are most easily cleaned. Due to the superior level of micro-smoothness attainable on metals, these showed a significantly lower activity retention than the organic coatings examined to date. A comparison of the relative efficiency of cleaning in Decon 75 and Arklone X showed that, generally speaking, metal surfaces were cleaned equally well by both media, while the unaged organic surfaces were decontaminated more thoroughly in Arklone X, though the differences were somewhat marginal. (author)

  9. Pulse column hydrodynamics for liquid-liquid extraction with truncated-disc packing

    International Nuclear Information System (INIS)

    Bracou, H.; Hanssens, A.

    1993-01-01

    The experimental installation consists of a stainless steel truncated-disc packing with 25% axial transparency, 25 mm spacing between the plates and 50 mm diameter (DT 25/25/50 stainless steel). The phase system used during these tests is hydrogenated tetra-propylene (HTP, dispersed phase)/water (continuous phase), operated with a continuous aqueous phase (the ''FAC'' mode). The aim of this work is to gather all the data required for the modelling stage, based on a population-balance-type mathematical tool, with the dual objective of supplying the basic information needed by the model and of establishing an experimental data base for model validation. 9 refs., 4 figs

  10. Unit rupture work as a criterion for quantitative estimation of hardenability in steel

    International Nuclear Information System (INIS)

    Kramarov, M.A.; Orlov, E.D.; Rybakov, A.B.

    1980-01-01

    It is shown that the high sensitivity of the fracture resistance of structural steel to the degree of hardenability achieved during hardening can be used for a quantitative estimation of the latter. A criterion kappa is proposed: the ratio of the unit rupture work in the case of incomplete hardenability, a_T(ih), to the analogous value obtained in the case of complete hardenability, A_T(ch), at the testing temperature corresponding to the critical temperature T_100(M). The high sensitivity of the criterion to the structure of the hardened steel is confirmed by experimental investigation of the 40Kh, 38KhNM and 38KhNMFA steels after isothermal hold-up at different temperatures, corresponding to the production of various products of austenite decomposition

  11. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
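As a minimal illustration of the linear approach to parameter estimation described in this record, a first-order rate constant can be recovered by regressing log-concentration on time. All data below are synthetic; the rate constant, initial concentration and noise level are assumptions for the sketch, not values from the chapter.

```python
import numpy as np

# Synthetic first-order decay data: C(t) = C0 * exp(-k t), with k = 0.5, C0 = 2
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
k_true, c0_true = 0.5, 2.0
c = c0_true * np.exp(-k_true * t) * np.exp(rng.normal(0.0, 0.01, t.size))

# Linear approach: fit ln C = ln C0 - k t by ordinary least squares
slope, intercept = np.polyfit(t, np.log(c), 1)
k_hat, c0_hat = -slope, np.exp(intercept)
print(k_hat, c0_hat)
```

For stiff or multi-reaction systems the same idea generalizes to nonlinear least squares coupled with a dynamic solver, as the abstract notes.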

  12. Estimation of radiation exposure associated with inert gas radionuclides discharged to the environment by the nuclear power industry

    International Nuclear Information System (INIS)

    Bryant, P.M.; Jones, J.A.

    1973-05-01

    Several fission product isotopes of krypton and xenon are formed during operation of nuclear power stations, while other radioactive inert gases, notably isotopes of argon and nitrogen, are produced as neutron activation products. With the exception of 85Kr these radionuclides are short-lived, and the containment and hold-up arrangements in different reactor systems influence the composition of the inert gas mixtures discharged to the environment. Cooling of irradiated fuel before chemical reprocessing reduces very substantially the amounts of the short-lived krypton and xenon isotopes available for discharge at reprocessing plants, but almost all the 85Kr formed in the fuel is currently discharged to atmosphere from these plants. Estimates are made of the radiation exposure of the public associated with these discharges to atmosphere, taking into account the type of radiation emitted, the radioactive half-life and the local, regional and world-wide populations concerned. Such estimates are often based on simple models in which activity is assumed to be distributed in a semi-infinite cloud. The model used in this assessment takes into account the finite cloud near the point of its discharge and its behaviour when dispersion in the atmosphere is affected by the presence of buildings. This is particularly important in the case of discharges from those reactors which do not have high stacks. The model also provides in detail for the continued world-wide circulation of the longer-lived 85Kr. (author)

  13. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  14. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, which would otherwise be infeasible due to the potentially high computational costs. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors’ knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, new IFs are proposed here to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in the performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.

  15. A radiotracer method for the dynamic measurement of the in-process inventory of dissolved materials

    International Nuclear Information System (INIS)

    Iqbal, M.N.; Gardner, R.P.; Verghese, K.

    1990-01-01

    A radioactive tracer method and the associated mathematical models have been developed to determine the in-process inventory of zirconium in a pulse column from the counting rate data taken at the exits of the aqueous and organic phases after an impulse injection of the radiotracer at the aqueous-phase inlet. Model parameters were obtained by applying a nonlinear least-squares fitting of the models to the experimental data. The total inventory of zirconium present in the pulse column under different operating conditions was determined by calculating the average residence time of zirconium in the system using these model parameters. The results indicate that the radiotracer technique provides a viable means of on-line determination of solute mass holdup. (author)
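The inventory calculation this record describes can be sketched as follows: the mean residence time is the first moment of the tracer's exit-age curve after an impulse injection, and the solute holdup follows from inventory = feed rate × mean residence time. The exponential response and the feed rate below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical tracer concentration at the column exit after an impulse
# injection. Inventory follows from M = F * tau, where F is the steady solute
# feed rate and tau is the mean residence time (first moment of the curve).
t = np.linspace(0.0, 60.0, 601)              # time, min
tau_true = 12.0
c = np.exp(-t / tau_true) / tau_true          # ideal mixed-tank impulse response

tau = np.sum(t * c) / np.sum(c)               # mean residence time, min
feed_rate = 3.0                               # g/min of solute entering the column
inventory = feed_rate * tau                   # g of solute held up in the column
print(tau, inventory)
```

With real data, c would be the measured counting-rate curve, and the slight underestimate of tau from truncating the tail is the usual practical caveat.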

  16. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  17. The Self Attenuation Correction for Holdup Measurements, a Historical Perspective

    International Nuclear Information System (INIS)

    Oberer, R. B.; Gunn, C. A.; Chiang, L. G.

    2006-01-01

    Self attenuation has historically caused both conceptual as well as measurement problems. The purpose of this paper is to eliminate some of the historical confusion by reviewing the mathematical basis and by comparing several methods of correcting for self attenuation focusing on transmission as a central concept

  18. Bubble Column with Electrolytes: Gas Holdup and Flow Regimes

    Czech Academy of Sciences Publication Activity Database

    Orvalho, Sandra; Růžička, Marek; Drahoš, Jiří

    2009-01-01

    Roč. 48, č. 17 (2009), s. 8237-8243 ISSN 0888-5885 R&D Projects: GA ČR GA104/07/1110; GA ČR GP104/09/P255; GA AV ČR(CZ) IAA200720801; GA MŠk LA319 Institutional research plan: CEZ:AV0Z40720504 Keywords : bubble column * hydrodynamics * surfactants Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 1.758, year: 2009

  19. Uranium Holdup Survey Program (UHSP) Lean Improvement Project

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Jeff [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)

    2017-10-13

    This report discusses the UHSP monitoring program, a radioactive material accounting process, and its purpose. It describes a systematic approach to implementing Lean principles, determining key requirements, and identifying the root causes of variation and disruption that interfere with program efficiency and effectiveness. Preexisting issues within the UHSP are modeled to illustrate the impact that they have on the large and extensive systems involved.

  20. Holdup time measurement by radioactive tracers in pulp production

    International Nuclear Information System (INIS)

    Roetzer, H.; Donhoffer, D.

    1988-12-01

    A batch of pulp was to be labelled before passing through two bleaching towers of a pulp plant. Activated glass fibres containing Na-24, with a half-life of 15 hours, were used as the tracer. Laboratory tests showed that the glass fibres were suitable for transport studies of wood pulp. For use in the tests the fibres were activated and suspended in water. Due to the small diameter of the fibres (2-5 micrometers) this suspension shows physical properties very similar to the pulp. For detection, six scintillation probes were mounted at different positions outside the bleaching towers. Radiation protection during the test was very easy due to the low total activity of the tracer material. Residence time distributions for both towers were measured. The successful tracer experiments show that the method of labelling is suited for investigations of material transport in the pulp and paper industry. 3 figs., 11 refs., 2 tabs. (Author)

  1. Real time material accountability in a chemical reprocessing unit

    International Nuclear Information System (INIS)

    Morrison, G.W.; Blakeman, E.D.

    1979-01-01

    Real time material accountability for a pulse column in a chemical reprocessing plant has been investigated using a simple two-state Kalman filter. Operation of the pulse column was simulated by the SEPHIS-MOD4 code. Noisy measurements of the column inventory were obtained from two neutron detectors with various simulated counting errors. Various loss scenarios were simulated and analyzed by the Kalman filter. In all cases considered, the Kalman filter was a superior estimator of material loss
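A minimal sketch of such a two-state filter is below. It is not the SEPHIS-MOD4 setup: the dynamics, noise levels and the diversion scenario are invented for illustration. The state is [inventory, loss rate]; noisy detector readings observe inventory only, and the filter recovers both the inventory and a constant loss rate.

```python
import numpy as np

# Illustrative two-state Kalman filter for material accountability.
rng = np.random.default_rng(1)
dt, n = 1.0, 200
F = np.array([[1.0, -dt], [0.0, 1.0]])   # inventory drops by loss_rate*dt
H = np.array([[1.0, 0.0]])               # detectors see inventory only
Q = np.diag([1e-4, 1e-6])                # process noise (assumed)
R = np.array([[0.25]])                   # detector counting-error variance

# Simulate a diversion of 0.05 units/step from an initial inventory of 100
x_true = np.array([100.0, 0.05])
x, P = np.array([100.0, 0.0]), np.eye(2)
for _ in range(n):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.5, 1)
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(x)  # estimated [inventory, loss rate]
```

A steady positive loss-rate estimate well above its filter uncertainty is what flags a diversion in this kind of scheme.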

  2. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
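A sketch of the sample-point approach this record describes, varying the window width by the point of the sample observation (Abramson's square-root law), is below. The pilot bandwidth rule and the data are illustrative assumptions.

```python
import numpy as np

# Sample-point ("adaptive") kernel density estimate: each observation gets its
# own bandwidth h_i = h0 * sqrt(g / pilot(x_i)), Abramson's square-root law.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 400)

def gauss_kde(grid, data, h):
    # Fixed-bandwidth Gaussian KDE evaluated on `grid`
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(1) / (data.size * h * np.sqrt(2 * np.pi))

h0 = 1.06 * x.std() * x.size ** (-0.2)        # Silverman-style pilot bandwidth
grid = np.linspace(-4.0, 4.0, 201)
pilot = gauss_kde(x, x, h0)                    # pilot density at the data points
g = np.exp(np.mean(np.log(pilot)))             # geometric mean of pilot values
h_i = h0 * np.sqrt(g / pilot)                  # per-observation bandwidths
u = (grid[:, None] - x[None, :]) / h_i[None, :]
f = (np.exp(-0.5 * u**2) / h_i[None, :]).sum(1) / (x.size * np.sqrt(2 * np.pi))
mass = f.sum() * (grid[1] - grid[0])           # should be close to 1
print(f.max(), mass)
```

The alternative the abstract calls a balloon estimator would instead vary the bandwidth by the point of estimation.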

  3. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  4. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  5. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  6. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  7. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
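Items (1) and (2) of the session can be sketched as follows: replicate measurements of several items give a pooled within-item variance (the random error component), while the mean deviation from known standards gives the systematic component. The data and error magnitudes below are invented for illustration.

```python
import numpy as np

# Replicate measurements of four items with known reference (standard) values.
rng = np.random.default_rng(3)
true_values = np.array([10.0, 12.0, 15.0, 9.0])
bias, sigma = 0.2, 0.1                        # assumed true systematic / random errors
reps = true_values[:, None] + bias + rng.normal(0.0, sigma, (4, 6))

within_var = reps.var(axis=1, ddof=1)         # per-item replicate variance
random_var = within_var.mean()                # pooled random error variance
bias_hat = (reps.mean(axis=1) - true_values).mean()  # systematic error estimate
print(random_var, bias_hat)
```

In practice the pooled variance would be degree-of-freedom weighted when replicate counts differ across items.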

  8. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  9. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  10. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    Based on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs...
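The two-part structure described here can be sketched as a logistic model for whether any cost is incurred, followed by a log-linear model for the cost level among those with positive costs. The data are purely synthetic; the covariate, coefficients and the log-normal retransformation with a known error variance are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
age = rng.uniform(60, 90, n)
X = np.column_stack([np.ones(n), (age - 75) / 10])
p = 1 / (1 + np.exp(-(0.5 + 0.8 * X[:, 1])))          # true incurrence probability
any_cost = rng.random(n) < p
log_cost = 7.0 + 0.5 * X[:, 1] + rng.normal(0.0, 0.3, n)
cost = np.where(any_cost, np.exp(log_cost), 0.0)

# Part 1: logistic regression fitted by plain gradient ascent on the likelihood
b = np.zeros(2)
for _ in range(500):
    mu = 1 / (1 + np.exp(-X @ b))
    b += 0.5 * X.T @ (any_cost - mu) / n

# Part 2: OLS on log cost for the positive-cost subsample
pos = cost > 0
a, *_ = np.linalg.lstsq(X[pos], np.log(cost[pos]), rcond=None)

# Predicted cost = P(any cost) * E[cost | cost > 0], log-normal smearing
expected = (1 / (1 + np.exp(-X @ b))) * np.exp(X @ a + 0.3**2 / 2)
print(b, a, expected.mean())
```

The product of the two parts is what makes the model choice matter so much for predicted costs: errors in either part multiply through.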

  11. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  12. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.; Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  13. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  14. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  15. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch’s method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
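The reference method here, the averaged periodogram (Welch's method), can be sketched for a synthetic slow-time Doppler signal: a complex tone in white noise, with the PSD averaged over short non-overlapping segments. The segment length, tone frequency and noise level are illustrative assumptions (boxcar window, no overlap, for brevity).

```python
import numpy as np

# Averaged periodogram (Welch's method, boxcar window, no overlap) for a
# synthetic complex tone at 0.1 cycles/sample in white noise.
rng = np.random.default_rng(5)
n = 1024
t = np.arange(n)
sig = np.exp(2j * np.pi * 0.1 * t) \
    + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def welch(x, seg):
    # Split into non-overlapping segments and average their periodograms
    segs = x[: len(x) // seg * seg].reshape(-1, seg)
    return np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / seg

psd = welch(sig, 64)
f = np.fft.fftfreq(64)                 # frequencies in cycles/sample
peak = f[np.argmax(psd)]
print(peak)
```

Shortening the segments reduces the variance of the PSD estimate at the cost of frequency resolution, which is exactly the trade-off the adaptive methods in this paper try to escape.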

  16. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  17. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.

  18. Radiation risk estimation

    International Nuclear Information System (INIS)

    Schull, W.J.; Texas Univ., Houston, TX

    1992-01-01

    Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and the models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will it improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)

  19. Estimation of Jump Tails

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Victor

    We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.

  20. Bridged Race Population Estimates

    Data.gov (United States)

    U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  2. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    We consider the nonparametric regression model Z_j = X(t_j) + e_j, j = 1,2,…,n, where X(t_j) is the regression curve and the random errors e_j are independently normally distributed with zero mean and variance s^2/b_j, b_j > 0. The estimate of X is obtained by minimizing a weighted penalized least squares criterion; the solution of this optimisation is a weighted natural polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.

  3. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    In the first step we estimate the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...

  4. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearing frequently fall out of service for various reasons: heavy loads, unsuitable lubrications, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis. Bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearings from a healthy one. SK provides additional information given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using short time Fourier transform known as spectrogram. The estimation of SK is similar to the estimation of psd. The estimation falls in model-free estimation and plug-in estimator. Some numerical studies using simulations are discussed to support the methodology. Spectral kurtosis of some stationary signals are analytically obtained and used in simulation study. Kurtosis of time domain has been a popular tool for detecting non-normality. Spectral kurtosis is an extension of kurtosis in frequency domain. The relationship between time domain and frequency domain analysis is establish through power spectrum-autocovariance Fourier transform. Fourier transform is the main tool for estimation in frequency domain. The power spectral density is estimated through periodogram. In this paper, the short time Fourier transform of the spectral kurtosis is reviewed, a bearing fault (inner ring and outer ring) is simulated. 
The bearing response, power spectrum, and spectral kurtosis are plotted to
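The spectrogram-based plug-in estimator described in this record can be sketched in a few lines: frame the signal, take a DFT per frame, and form SK(f) = ⟨|X(t,f)|⁴⟩/⟨|X(t,f)|²⟩² − 2, which is near zero for stationary Gaussian signals and rises in bands excited by impulses. This is a sketch, not the paper's code; the window length, impulse amplitudes, and the complex-Gaussian normalisation are assumptions.

```python
import cmath, math, random

def dft(frame):
    """Naive DFT (adequate for short analysis windows)."""
    N = len(frame)
    return [sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def spectral_kurtosis(signal, nwin=32):
    """Plug-in SK estimate from non-overlapping spectrogram frames:
    SK(f) = <|X(t,f)|^4> / <|X(t,f)|^2>^2 - 2,
    which is ~0 for stationary Gaussian signals (DC bin excluded)."""
    frames = [signal[i:i + nwin] for i in range(0, len(signal) - nwin + 1, nwin)]
    spectra = [dft(f) for f in frames]
    sk = []
    for k in range(1, nwin // 2):
        m2 = sum(abs(X[k]) ** 2 for X in spectra) / len(spectra)
        m4 = sum(abs(X[k]) ** 4 for X in spectra) / len(spectra)
        sk.append(m4 / (m2 * m2) - 2.0)
    return sk

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(2048)]
faulty = list(noise)
for i in range(0, 2048, 256):       # short impulse train, as from a bearing defect
    faulty[i] += 20.0

sk_noise = spectral_kurtosis(noise)   # near zero in every band
sk_fault = spectral_kurtosis(faulty)  # elevated across bands hit by the impulses
```

In practice an FFT-based spectrogram (e.g. from a signal-processing library) replaces the naive DFT; the moment ratio itself is unchanged.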

  5. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  6. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  7. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  8. Single snapshot DOA estimation

    Science.gov (United States)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
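For a single source and a single snapshot, the deterministic maximum likelihood estimator the paper recommends reduces to maximising the beamformer output |a(θ)ᴴy|² over θ (for a constant-modulus steering vector, ‖a‖² is constant, so projection onto a(θ) and correlation with a(θ) peak at the same angle). A minimal sketch; the array geometry, SNR, and search grid are assumptions, not taken from the paper.

```python
import cmath, math, random

def steering(theta, M):
    """ULA steering vector with half-wavelength element spacing (assumed geometry)."""
    return [cmath.exp(1j * math.pi * m * math.sin(theta)) for m in range(M)]

def dml_single_source(y, M, grid):
    """Single-snapshot deterministic ML for one source: argmax |a(theta)^H y|^2."""
    best_p, best_theta = -1.0, None
    for theta in grid:
        a = steering(theta, M)
        c = sum(ai.conjugate() * yi for ai, yi in zip(a, y))
        p = abs(c) ** 2
        if p > best_p:
            best_p, best_theta = p, theta
    return best_theta

random.seed(1)
M = 16
true_theta = math.radians(20.0)
a0 = steering(true_theta, M)
# one noisy snapshot of a single far-field target
y = [ai + 0.05 * complex(random.gauss(0, 1), random.gauss(0, 1)) for ai in a0]
grid = [math.radians(0.1 * k) for k in range(-900, 901)]   # -90 deg .. +90 deg
est = dml_single_source(y, M, grid)
```

With several sources the deterministic ML cost becomes a joint search over all angles, which is where the single-snapshot robustness discussed in the paper matters.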

  9. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy

  10. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  11. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
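The GEE idea in its simplest form uses an independence working correlation: the point estimates then coincide with ordinary GLM/OLS estimates, while a cluster-robust "sandwich" covariance accounts for the within-cluster correlation that plain GLM ignores. A hypothetical sketch for a Gaussian linear model with one covariate and simulated clustered data; real GEE software, as treated in the book, supports other working correlations and GLM families.

```python
import random

def fit_gee_independence(x, y, groups):
    """OLS fit of y = b0 + b1*x with a cluster-robust (sandwich) variance:
    GEE with an independence working correlation, Gaussian family.
    Returns (b0, b1, robust standard error of b1)."""
    n = len(x)
    # bread: B = X'X for the design X = [1, x]
    s11, s12, s22 = float(n), sum(x), sum(xi * xi for xi in x)
    det = s11 * s22 - s12 * s12
    binv = ((s22 / det, -s12 / det), (-s12 / det, s11 / det))
    t0, t1 = sum(y), sum(xi * yi for xi, yi in zip(x, y))
    b0 = binv[0][0] * t0 + binv[0][1] * t1
    b1 = binv[1][0] * t0 + binv[1][1] * t1
    # meat: M = sum over clusters g of (X_g' r_g)(X_g' r_g)'
    resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
    meat = [[0.0, 0.0], [0.0, 0.0]]
    for g in set(groups):
        u = [sum(r for r, gi in zip(resid, groups) if gi == g),
             sum(r * xi for r, xi, gi in zip(resid, x, groups) if gi == g)]
        for i in range(2):
            for j in range(2):
                meat[i][j] += u[i] * u[j]
    # sandwich: V = B^-1 M B^-1
    bm = [[sum(binv[i][k] * meat[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    v = [[sum(bm[i][k] * binv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return b0, b1, v[1][1] ** 0.5

random.seed(3)
x, y, groups = [], [], []
for g in range(25):                          # 25 clusters of 4 observations
    cluster_effect = random.gauss(0, 1.0)    # shared within-cluster shift
    for _ in range(4):
        xi = random.uniform(0, 10)
        x.append(xi)
        groups.append(g)
        y.append(1.0 + 2.0 * xi + cluster_effect + random.gauss(0, 0.5))

b0, b1, se1 = fit_gee_independence(x, y, groups)
```

The sandwich step is what distinguishes this from a naive GLM fit: its standard errors remain valid even though the working correlation (independence) is wrong.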

  12. Digital Quantum Estimation

    Science.gov (United States)

    Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo

    2017-11-01

    Quantum metrology calculates the ultimate precision of all estimation strategies, measuring what is their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.

  13. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A [VTT Energy, Espoo (Finland)

    1997-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  14. Estimating Delays In ASIC's

    Science.gov (United States)

    Burke, Gary; Nesheiwat, Jeffrey; Su, Ling

    1994-01-01

    Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). A design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.

  15. Organizational flexibility estimation

    OpenAIRE

    Komarynets, Sofia

    2013-01-01

    By the help of parametric estimation the evaluation scale of organizational flexibility and its parameters was formed. Definite degrees of organizational flexibility and its parameters for the Lviv region enterprises were determined. Grouping of the enterprises under the existing scale was carried out. Special recommendations to correct the enterprises behaviour were given.

  16. On Functional Calculus Estimates

    NARCIS (Netherlands)

    Schwenninger, F.L.

    2015-01-01

    This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm

  17. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  18. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  19. Numerical Estimation in Preschoolers

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  20. Estimating Gender Wage Gaps

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  1. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
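Since a periodic signal concentrates its power at integer multiples of the fundamental, f0 can be estimated by maximising the power summed over a few harmonics. The harmonic-summation sketch below illustrates the principle only; it is not the fast method this paper proposes, and the test signal and candidate grid are invented.

```python
import cmath, math

def power_at(x, f):
    """|DTFT|^2 of x at normalised frequency f (cycles/sample)."""
    return abs(sum(xn * cmath.exp(-2j * math.pi * f * n)
                   for n, xn in enumerate(x))) ** 2

def harmonic_summation_f0(x, f0_grid, n_harm=3):
    """Estimate the fundamental by maximising power summed at f0, 2*f0, ..."""
    return max(f0_grid, key=lambda f0: sum(power_at(x, l * f0)
                                           for l in range(1, n_harm + 1)))

# synthetic periodic signal: fundamental at 0.043 cycles/sample plus a harmonic
n = 400
f0_true = 0.043
x = [math.sin(2 * math.pi * f0_true * t)
     + 0.5 * math.sin(2 * math.pi * 2 * f0_true * t) for t in range(n)]
grid = [0.02 + 0.0005 * k for k in range(100)]   # candidate fundamentals
f0_hat = harmonic_summation_f0(x, grid)
```

Summing over harmonics (rather than picking the single largest spectral peak) is what protects the estimate against octave errors when a harmonic is stronger than the fundamental.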

  2. On Gnostical Estimates

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2017-01-01

    Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707

  3. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10

  4. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  5. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
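The numerical-experiment method the book describes can be sketched in a few lines: generate artificial series with a known trend plus correlated noise, estimate the trend, and score the estimator's accuracy by Monte Carlo RMSE. The estimator here is a simple centered moving average, not one of the book's algorithms; trend shape, AR(1) noise, and all parameters are invented for illustration.

```python
import math, random

def moving_average(x, w):
    """Centered moving-average trend estimate (interior points only)."""
    h = w // 2
    return [sum(x[i - h:i + h + 1]) / w for i in range(h, len(x) - h)]

def monte_carlo_rmse(n=200, w=21, reps=50, phi=0.6, sigma=1.0, seed=7):
    """Accuracy of a trend estimator on artificial series:
    known linear trend + AR(1) noise, averaged over Monte Carlo replicates."""
    rng = random.Random(seed)
    trend = [0.05 * t for t in range(n)]
    h = w // 2
    sq_errs = []
    for _ in range(reps):
        e, series = 0.0, []
        for t in range(n):
            e = phi * e + rng.gauss(0, sigma)     # AR(1) noise
            series.append(trend[t] + e)
        est = moving_average(series, w)
        sq_errs.extend((est[i] - trend[i + h]) ** 2 for i in range(len(est)))
    return math.sqrt(sum(sq_errs) / len(sq_errs))

rmse = monte_carlo_rmse()
```

Because the true trend is known by construction, the RMSE directly measures estimator accuracy under controlled conditions, which is exactly what real data cannot provide.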

  6. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
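The simplest conceivable correction step illustrates the DLE idea: scale the customer-class load-model values for one hour so that their sum matches the measured feeder total from SCADA. This proportional adjustment is a toy version only; the actual DLE method is statistical and keeps the corrected models close to the original load-research models. Class names and numbers below are invented.

```python
def adjust_class_loads(model_loads, measured_total):
    """Scale customer-class load-model values so their sum matches the
    measured feeder load for that hour (naive proportional correction).
    model_loads: dict of class name -> modelled load in kW."""
    model_total = sum(model_loads.values())
    k = measured_total / model_total
    return {cls: load * k for cls, load in model_loads.items()}

# hypothetical one-hour snapshot: models say 400 kW, SCADA measured 360 kW
models = {"residential": 120.0, "commercial": 80.0, "industrial": 200.0}
adjusted = adjust_class_loads(models, measured_total=360.0)
```

After adjustment the class loads sum exactly to the measurement while preserving the classes' relative shares from the load models.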

  7. Estimating ISABELLE shielding requirements

    International Nuclear Information System (INIS)

    Stevens, A.J.; Thorndike, A.M.

    1976-01-01

    Estimates were made of the shielding thicknesses required at various points around the ISABELLE ring. Both hadron and muon requirements are considered. Radiation levels at the outside of the shield and at the BNL site boundary are kept at or below 1000 mrem per year and 5 mrem/year respectively. Muon requirements are based on the Wang formula for pion spectra, and the hadron requirements on the hadron cascade program CYLKAZ of Ranft. A muon shield thickness of 77 meters of sand is indicated outside the ring in one area, and hadron shields equivalent to from 2.7 to 5.6 meters in thickness of sand above the ring. The suggested safety allowance would increase these values to 86 meters and 4.0 to 7.2 meters respectively. There are many uncertainties in such estimates, but these last figures are considered to be rather conservative
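For orientation only: outside of cascade codes like CYLKAZ, a first-cut shield sizing uses exponential attenuation with a material relaxation length λ, so the required thickness is x = λ ln(D0/Dlimit). The numbers below (unshielded dose rate and a ~0.5 m relaxation length for hadrons in sand) are assumptions for illustration, not values from the report.

```python
import math

def required_thickness(dose_unshielded, dose_limit, relaxation_length):
    """First-cut shield thickness from exponential attenuation:
    D(x) = D0 * exp(-x / lam)  ->  x = lam * ln(D0 / D_limit).
    Illustrative only; detailed designs come from cascade calculations."""
    return relaxation_length * math.log(dose_unshielded / dose_limit)

# hypothetical: 1e6 mrem/yr unshielded, 1000 mrem/yr design target,
# ~0.5 m relaxation length in sand
x = required_thickness(1e6, 1000.0, 0.5)   # metres of sand
```

Even this crude model lands in the few-metre range quoted for the hadron shield, which is why exponential scaling arguments are a useful sanity check on code results.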

  8. Variance Function Estimation. Revision.

    Science.gov (United States)

    1987-03-01


  9. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
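The regression described can be sketched directly: beta is the slope of asset returns regressed on market (index) returns, i.e. cov(rₐ, rₘ)/var(rₘ). The return series below are synthetic and constructed so the true beta is exactly 2.

```python
def estimate_beta(asset, market):
    """Regression beta of an asset: slope of asset returns on market returns,
    beta = cov(r_a, r_m) / var(r_m)."""
    n = len(asset)
    ma = sum(asset) / n
    mm = sum(market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(asset, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

market = [0.01, -0.02, 0.03, 0.005, -0.01, 0.02]   # invented index returns
asset = [2 * m + 0.001 for m in market]            # built to have beta = 2
beta = estimate_beta(asset, market)
```

In practice the choice of index, return frequency, and estimation window all move the estimate, which is part of the controversy the paper discusses.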

  10. Estimating Venezuela's Latent Inflation

    OpenAIRE

    Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo

    2011-01-01

    Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...

  11. Chernobyl source term estimation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1990-09-01

    The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and duration of peak concentrations measured over Europe, Japan, Kuwait, and the US with the model predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the 131I and 90Sr released by the Chernobyl accident was only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs

  12. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  13. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
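The detection-probability side of the effectiveness model can be illustrated with classical attribute sampling: if d of N items have been diverted and the inspector verifies a random sample of n items, the probability of catching at least one follows from the hypergeometric distribution. The plan numbers below are hypothetical, not the sampling plan calculated in the report for the model plant.

```python
from math import comb

def detection_probability(N, n, d):
    """Probability that a random sample of n items out of N contains at
    least one of the d diverted items (hypergeometric attribute sampling):
    P = 1 - C(N-d, n) / C(N, n)."""
    if d <= 0 or n <= 0:
        return 0.0
    if n > N - d:
        return 1.0    # sample cannot avoid all diverted items
    return 1.0 - comb(N - d, n) / comb(N, n)

# hypothetical plan: 200 items in the stratum, 10 diverted, 30 verified
p = detection_probability(200, 30, 10)
```

Such curves are what link the effort model (how many items can be verified with the available inspector-days) to the effectiveness model (the resulting detection probability).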

  14. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article attempts to present the concept of qualitative robustness as forwarded by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.
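A finite-sample cousin of these ideas can be computed directly: how far can a single gross outlier move an estimate? The mean (breakdown point 0) can be moved arbitrarily far, while the median barely moves. This sketch is not the paper's QRI, and the data are invented; it only illustrates the sensitivity notion the concepts above formalise.

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def max_bias_one_outlier(estimator, xs, contaminant=1e6):
    """Finite-sample sensitivity: shift in the estimate caused by appending
    one gross outlier to the sample."""
    return abs(estimator(xs + [contaminant]) - estimator(xs))

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]   # invented measurements
bias_mean = max_bias_one_outlier(mean, data)     # huge: mean breaks down
bias_median = max_bias_one_outlier(median, data) # tiny: median is robust
```

The influence function is the infinitesimal version of this experiment; qualitative robustness asks whether such small contaminations can ever produce large changes in the estimator's distribution.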

  15. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828

  16. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  17. Nuclear material inventory estimation in solvent extraction contactors

    International Nuclear Information System (INIS)

    Beyerlein, A.; Geldard, J.

    1986-06-01

    This report describes the development of simple nuclear material (uranium and plutonium) inventory relations for mixer-settler solvent extraction contactors used in reprocessing spent nuclear fuels. The relations are developed for light water reactor fuels where the organic phase is 30% tri-n-butylphosphate (TBP) by volume. For reprocessing plants using mixer-settler contactors as much as 50% of the nuclear material within the contactors is contained in A type (aqueous to organic extraction) contactors. Another very significant portion of the contactor inventory is in the partitioning contactors. The stripping contactors contain a substantial uranium inventory but contain a very small plutonium inventory (about 5 to 10% of the total contactor inventory). The simplified inventory relations developed in this work for mixer-settler contactors reproduce the PUBG databases within about a 5% standard deviation. They can be formulated to explicitly show the dependence of the inventory on nuclear material concentrations in the aqueous feed streams. The dependence of the inventory on contactor volumes, phase volume ratios, and acid and TBP concentrations are implicitly contained in parameters that can be calculated for a particular reprocessing plant from nominal flow sheet data. The terms in the inventory relations that represent the larger portion of the inventory in A type and partitioning contactors can be extended to pulsed columns virtually without change
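The flavour of such inventory relations can be illustrated by a stage-by-stage sum of aqueous and organic holdup, with the organic concentration tied to the aqueous one through a distribution ratio D. All volumes, concentrations, and D values below are hypothetical; the report's simplified relations fold these stage details into parameters calculated from nominal flowsheet data.

```python
def contactor_inventory(stages):
    """Nuclear-material inventory of a contactor bank as the sum over stages
    of aqueous plus organic holdup, with c_org = D * c_aq (illustrative).
    stages: list of (V_aq [L], c_aq [g/L], V_org [L], D)."""
    return sum(v_aq * c_aq + v_org * (d * c_aq)
               for v_aq, c_aq, v_org, d in stages)

# hypothetical 4-stage extraction bank with a falling aqueous profile
bank = [(50.0, 1.2, 40.0, 8.0),
        (50.0, 0.6, 40.0, 8.0),
        (50.0, 0.3, 40.0, 8.0),
        (50.0, 0.15, 40.0, 8.0)]
grams = contactor_inventory(bank)   # grams of nuclear material held up
```

The dominance of the organic terms when D is large mirrors the report's observation that the extraction (A-type) contactors hold much of the in-process inventory.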

  18. Estimation of Lung Ventilation

    Science.gov (United States)

    Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.

    Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing them to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures for estimating regional ventilation from image registration of CT images, their comparison to the existing gold standard, and their application in radiation therapy.
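Local tissue expansion from a registration-derived deformation field is commonly summarised by the Jacobian determinant of φ(x) = x + u(x): J > 1 indicates local expansion (ventilation), J < 1 contraction. A 2-D finite-difference sketch on a displacement grid, assuming unit voxel spacing; the chapter's methods operate on 3-D CT volumes.

```python
def jacobian_det_2d(disp_x, disp_y, i, j):
    """Determinant of the Jacobian of phi(x) = x + u(x) at grid point (i, j),
    via central differences on the displacement fields (unit grid spacing).
    disp_x[i][j], disp_y[i][j]: x- and y-displacement at row i, column j."""
    dux_dx = (disp_x[i][j + 1] - disp_x[i][j - 1]) / 2.0
    dux_dy = (disp_x[i + 1][j] - disp_x[i - 1][j]) / 2.0
    duy_dx = (disp_y[i][j + 1] - disp_y[i][j - 1]) / 2.0
    duy_dy = (disp_y[i + 1][j] - disp_y[i - 1][j]) / 2.0
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# uniform 10% expansion: u(x, y) = (0.1 x, 0.1 y)  ->  J = 1.1 * 1.1 = 1.21
n = 5
ux = [[0.1 * x for x in range(n)] for _ in range(n)]   # depends on column (x)
uy = [[0.1 * y for _ in range(n)] for y in range(n)]   # depends on row (y)
J = jacobian_det_2d(ux, uy, 2, 2)
```

Mapping J voxel-by-voxel between inspiration and expiration images yields the regional ventilation maps compared against nuclear-medicine gold standards.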

  19. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  20. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
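The full model estimates rates that depend on both teams, home ice, and manpower; stripped to a single team, the Poisson maximum-likelihood estimate is simply the per-game mean, which already yields score probabilities. A toy sketch with invented goal counts, not the 2008-2009 data used in the article.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def estimate_rate(goals_per_game):
    """MLE of a Poisson scoring rate is the sample mean of goals per game."""
    return sum(goals_per_game) / len(goals_per_game)

# hypothetical goal counts for one team over ten games
goals = [3, 2, 4, 1, 3, 2, 5, 2, 3, 3]
lam = estimate_rate(goals)           # goals per game
p_shutout = poisson_pmf(0, lam)      # probability of being held scoreless
```

The article's model refines this by letting the rate depend multiplicatively on the opponent's defence, home-ice advantage, and the manpower situation, fitted over all games jointly.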

  1. Risk estimation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, R A.D.

    1982-10-01

    Risk assessment involves subjectivity, which makes objective decision making difficult in the nuclear power debate. The author reviews the process and uncertainties of estimating risks as well as the potential for misinterpretation and misuse. Risk data from a variety of aspects cannot be summed because the significance of different risks is not comparable. A method for including political, social, moral, psychological, and economic factors, environmental impacts, catastrophes, and benefits in the evaluation process could involve a broad base of lay and technical consultants, who would explain and argue their evaluation positions. 15 references. (DCK)

  2. Estimating Gear Teeth Stiffness

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness: firstly the boundary condition through the gear rim size included in the stiffness calculation...... and secondly the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is, however, found that the stiffness of an individual tooth can be expressed in a linear form, assuming that the contact length is constant....

  3. Mixtures Estimation and Applications

    CERN Document Server

    Mengersen, Kerrie; Titterington, Mike

    2011-01-01

    This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject
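
    The EM iteration at the heart of the book can be illustrated for the simplest case, a two-component one-dimensional Gaussian mixture. This is a minimal generic sketch, not code from the book:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    # EM for a two-component 1-D Gaussian mixture: alternate between
    # computing posterior responsibilities (E-step) and re-estimating
    # weights, means and variances from them (M-step).
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment estimates
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x)
```

    The same alternation generalizes to discrete component distributions and to the MCMC treatments the book covers.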

  4. Robust Wave Resource Estimation

    DEFF Research Database (Denmark)

    Lavelle, John; Kofoed, Jens Peter

    2013-01-01

    density estimates of the PDF as a function both of Hm0 and Tp, and Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data....... An overview is given of the methods used to do this, and a method for identifying outliers of the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0;2 or Hm0 and Tp for the Hanstholm site data are demonstrated. As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2....

  5. Estimations of actual availability

    International Nuclear Information System (INIS)

    Molan, M.; Molan, G.

    2001-01-01

    Adaptation of the working environment (social, organizational, physical and psychological) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of actual human availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2000 workers), standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers the possibility of keeping an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the level of actual fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated, as can effects of health status and ageing, on the individual level. (author)

  6. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  7. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  8. Estimating Discount Rates

    Directory of Open Access Journals (Sweden)

    Laurence Booth

    2015-04-01

    Full Text Available Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators haven’t passed on the full impact of low interest rates, so that consumers face higher prices than they should whereas pension regulators have done the opposite, and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average equity market required return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
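
    The building blocks reviewed in the paper combine a base long-term interest rate with a market risk premium, as in the textbook CAPM. A small illustrative calculation follows; the beta and premium values are hypothetical round numbers, chosen only so the market figure lands in the 9-10 per cent range discussed above:

```python
def capm_required_return(risk_free, beta, market_risk_premium):
    # Textbook CAPM building block for a discount rate:
    # required return = risk-free rate + beta * market risk premium.
    return risk_free + beta * market_risk_premium

# Using the paper's ideal 4.0% base long-term rate; the 5.5% premium and
# the 0.6 utility beta are illustrative assumptions, not survey figures.
market_return = capm_required_return(0.04, 1.0, 0.055)   # market portfolio, beta = 1
utility_return = capm_required_return(0.04, 0.6, 0.055)  # lower for a low-beta utility
```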

  9. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  10. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  11. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
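
    The least squares methods that the proposed estimating-equation framework generalizes start from the classical method-of-moments empirical variogram. A minimal sketch of that building block (generic code, not the paper's implementation):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    # Classical method-of-moments (Matheron) estimator:
    # gamma(h) = average of 0.5 * (Z(s_i) - Z(s_j))^2 over point pairs
    # whose separation distance falls in each lag bin.
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

# For a spatially uncorrelated field, the variogram is flat at the
# process variance (here 1) for all lags.
rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1.0, size=(200, 2))
z = rng.normal(size=200)
gamma = empirical_variogram(coords, z, np.array([0.0, 0.25, 0.5, 0.75]))
```

    Least squares variogram fitting then minimizes the distance between these binned estimates and a parametric model; the paper's estimating equations subsume that step as a special case.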

  13. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, the heat of vaporization can be used to estimate the boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  14. State estimation in networked systems

    NARCIS (Netherlands)

    Sijs, J.

    2012-01-01

    This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well-known method for state estimation is the Kalman
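
    The Kalman filter mentioned above combines a model prediction with each measurement, weighting the two by their uncertainties. A scalar sketch (generic textbook code, not from the thesis):

```python
import numpy as np

def kalman_1d(zs, a=1.0, q=0.001, r=1.0, x0=0.0, p0=1.0):
    # Scalar Kalman filter: at each step, predict the state from the
    # process model (x' = a*x, process noise variance q), then blend in
    # the measurement z (noise variance r) via the Kalman gain.
    x, p, out = x0, p0, []
    for z in zs:
        x, p = a * x, a * p * a + q              # predict
        k = p / (p + r)                          # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p    # update
        out.append(x)
    return np.array(out)

# Track a constant state of 5.0 through noisy measurements.
rng = np.random.default_rng(3)
zs = 5.0 + rng.normal(0.0, 1.0, 300)
est = kalman_1d(zs)
```

    Networked settings add the complications the thesis addresses, such as quantized, delayed, or event-triggered measurements arriving over the network.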

  15. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  16. Uveal melanoma: Estimating prognosis

    Directory of Open Access Journals (Sweden)

    Swathi Kaliki

    2015-01-01

    Full Text Available Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as classification as Class II by gene expression, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.

  17. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  18. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  19. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  20. Estimation of Water Quality

    International Nuclear Information System (INIS)

    Vetrinskaya, N.I.; Manasbayeva, A.B.

    1998-01-01

    Water has a particular ecological function and is an indicator of the general state of the biosphere. In this context, toxicological evaluation of water by biological testing methods is highly topical. The peculiarity of biological testing information is that it integrally reflects the totality of properties of the examined environment as perceived by living objects. Rapid integral evaluation of the anthropogenic situation is the basic aim of biological testing; if this evaluation deviates from the normal state, detailed analysis and identification of dangerous components can be conducted later. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus Luteus, Candida crusei, Pseudomonas algaligenes) and the water plant elodea (Elodea canadensis Rich) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are the first to be disturbed under extreme conditions by various influences. Therefore, the ion penetration of elodea and micro-organism cells contained in the examined water with toxicants was used as the test function. Alteration of membrane penetration was estimated by measuring the electrical conductivity of electrolytes released from the cells of living objects into distilled water. The index of water toxicity is the ratio of the electrical conductivity in the experiment to the electrical conductivity in the control. Observations of the general state of plants incubated in the toxic water were also made (the chronic experiment was conducted for 60 days). The plants were incubated in water samples picked out from the gallery in 1996 and 1997, with incubation times of 1-10 days. The results showed that the ion penetration of elodea and micro-organism cells changed greatly under the influence of the radionuclides contained in the test water. Changes are taking place even in

  1. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    Directory of Open Access Journals (Sweden)

    ŞERBAN CLAUDIU VALENTIN

    2015-03-01

    Full Text Available Based on the one hand on the premise that an estimate is an approximate evaluation, combined with the fact that the term "estimate" is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are, in fact, dealing with estimates, and in our case with accounting estimates. Completing, on the other hand, the idea above with the phrase "estimated value", which implies a value obtained from an evaluation process whose size is not exact but approximated, that is, close to the actual size, the necessity becomes obvious of delimiting the hierarchical relationship between evaluation and estimate while considering the context in which the evaluation activity is carried out at entity level.

  2. Spring Small Grains Area Estimation

    Science.gov (United States)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. Report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  3. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  4. Quantity Estimation Of The Interactions

    International Nuclear Information System (INIS)

    Gorana, Agim; Malkaj, Partizan; Muda, Valbona

    2007-01-01

    In this paper we present some considerations about quantity estimation, regarding the range of interaction and the conservation laws in various types of interactions. Our estimations are made from both classical and quantum points of view and concern the interactions' carriers, the radius, the influence range, and the intensity of the interactions.

  5. CONDITIONS FOR EXACT CAVALIERI ESTIMATION

    Directory of Open Access Journals (Sweden)

    Mónica Tinajero-Bravo

    2014-03-01

    Full Text Available Exact Cavalieri estimation amounts to zero variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.

  6. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001

  7. Stochastic Estimation via Polynomial Chaos

    Science.gov (United States)

    2015-10-01

    AFRL-RW-EG-TR-2015-108, "Stochastic Estimation via Polynomial Chaos," Douglas V. Nance, Air Force Research Laboratory, reporting period 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic

  8. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Full Text Available Abstract Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome wide association studies.
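
    Lewontin's D' and the small-sample bias of its maximum likelihood (plug-in) estimator, which motivates the Bayesian correction, can be illustrated directly. The sketch below uses the standard definition of D'; the simulation settings are illustrative and not taken from the paper:

```python
import numpy as np

def d_prime(p_ab, p_a, p_b):
    # Lewontin's D' from the AB haplotype frequency and the two allele
    # frequencies: D = p_AB - p_A * p_B, normalized by the maximum
    # magnitude D can attain given the allele frequencies.
    d = p_ab - p_a * p_b
    if d >= 0:
        dmax = min(p_a * (1.0 - p_b), (1.0 - p_a) * p_b)
    else:
        dmax = min(p_a * p_b, (1.0 - p_a) * (1.0 - p_b))
    return 0.0 if dmax == 0 else d / dmax

# Bias illustration: with independent loci (true D' = 0) and only
# n = 20 haplotypes, the average |D'| plug-in estimate sits well above 0.
rng = np.random.default_rng(4)
vals = []
for _ in range(2000):
    a = rng.random(20) < 0.5        # allele A at locus 1
    b = rng.random(20) < 0.5        # allele B at locus 2, independent
    vals.append(abs(d_prime((a & b).mean(), a.mean(), b.mean())))
mean_abs_dprime = float(np.mean(vals))
```

    The Bayesian estimator counteracts exactly this tendency of the plug-in estimate to drift toward disequilibrium in small samples and for rare haplotypes.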

  9. Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi

    2003-01-01

    On-line and real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H ∞ estimator, an experiment on real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Some technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit, and digitalization of the nonlinear H ∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H ∞ estimator with sampled data of the nuclear instrumentation signal for the power responses under various reactivity insertions. Good performance of the estimated reactivity was observed, with almost no delay relative to the true reactivity and sufficient accuracy between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation with a data sampling period of 10 ms can certainly be realized. From the results of the experiment, it is concluded that the digital nonlinear H ∞ reactivity estimator can be applied as an on-line real-time reactivity meter for actual nuclear plants. (author)

  10. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross sectional sample of 854 panoramic radiographs, homogenously distributed by sex and age (15.0-24.0 years), were randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation...

  11. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles) is given.

  12. Laser cost experience and estimation

    International Nuclear Information System (INIS)

    Shofner, F.M.; Hoglund, R.L.

    1977-01-01

    This report addresses the question of estimating the capital and operating costs for LIS (Laser Isotope Separation) lasers, which have performance requirements well beyond the state of mature art. This question is seen with different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice, on a timely basis. Accordingly, this report attempts to provide ''ballpark'' estimators for capital and operating costs and useful design and operating information for lasers based on mature technology, and their LIS analogs. It is written very basically and is intended to respond about equally to the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the current, mature, industrialized laser track record (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, evolution of generalized estimating procedures for capital and operating cost estimators for new laser design

  13. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  14. Dynamic materials accounting for solvent-extraction systems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, D.D.; Ostenak, C.A.

    1979-01-01

    Methods for estimating nuclear materials inventories in solvent-extraction contactors are being developed. These methods employ chemical models and available process measurements. Comparisons of model calculations and experimental data for mixer-settlers and pulsed columns indicate that this approach should be adequate for effective near-real-time materials accounting in nuclear fuels reprocessing plants.

  15. Dynamic materials accounting for solvent-extraction systems

    International Nuclear Information System (INIS)

    Cobb, D.D.; Ostenak, C.A.

    1979-01-01

    Methods for estimating nuclear materials inventories in solvent-extraction contactors are being developed. These methods employ chemical models and available process measurements. Comparisons of model calculations and experimental data for mixer-settlers and pulsed columns indicate that this approach should be adequate for effective near-real-time materials accounting in nuclear fuels reprocessing plants

  16. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
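
The core idea can be illustrated numerically. The sketch below clips the eigenvalues of a singular sample covariance so that the ratio of the largest to the smallest eigenvalue is at most a chosen kappa_max. Note this fixed-ratio clipping is only a simplified stand-in for the paper's approach, which derives the truncation level from a maximum-likelihood criterion:

```python
import numpy as np

def clip_condition_number(S, kappa_max):
    """Return a covariance estimate with condition number at most kappa_max.

    Simplified eigenvalue-clipping illustration of condition-number
    regularization (not the paper's exact ML solution)."""
    # Eigendecomposition of the symmetric sample covariance
    vals, vecs = np.linalg.eigh(S)
    lo = vals.max() / kappa_max        # smallest eigenvalue permitted
    clipped = np.clip(vals, lo, None)  # raise small/zero/negative eigenvalues
    return (vecs * clipped) @ vecs.T   # reassemble V diag(clipped) V^T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))      # "large p, small n": n=20, p=50
S = np.cov(X, rowvar=False)            # rank-deficient, hence singular
S_reg = clip_condition_number(S, kappa_max=100.0)
print(np.linalg.cond(S_reg) <= 100.5)  # True
```

Because the raw sample covariance in the "large p, small n" regime is rank-deficient, the clipped version is invertible and well-conditioned by construction.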

  17. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  18. Radiation dose estimates for radiopharmaceuticals

    International Nuclear Information System (INIS)

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD Technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series is different from the MIRD 5, or Reference Man, phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. Assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms
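
The MIRD schema summarized above reduces to a weighted sum: the dose to a target organ is the administered activity times the sum, over source organs, of residence time multiplied by the corresponding S-value (dose per unit cumulated activity). A toy illustration; all residence times and S-values below are made-up numbers, not values from MIRDOSE3 or any real phantom:

```python
# Hypothetical residence times (hours) for three source regions
residence_times_h = {"liver": 2.5, "kidneys": 1.0, "remainder": 4.0}

# Hypothetical S-values to the liver, mGy per MBq-hour, one per source region
s_values = {"liver": 3.0e-2, "kidneys": 4.0e-3, "remainder": 1.0e-3}

administered_MBq = 100.0
# Target-organ dose = A0 * sum over sources of (residence time * S-value)
dose_liver_mGy = administered_MBq * sum(
    residence_times_h[organ] * s_values[organ] for organ in residence_times_h
)
print(round(dose_liver_mGy, 2))  # 8.3
```

Repeating the sum for each target organ, with the appropriate S-value column, yields the full dose table for a radiopharmaceutical.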

  19. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

  20. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).
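
AIC-based model-order selection, as used for the MOG and k-means baselines above, works by fitting each candidate number of modes k, recording its log-likelihood, and keeping the k that minimizes AIC = 2(number of parameters) - 2(log-likelihood). The log-likelihoods and parameter counts below are invented purely to illustrate the selection rule:

```python
def aic(log_likelihood, n_params):
    # Akaike Information Criterion: penalizes fit quality by model size
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: k modes -> (log-likelihood, parameter count for a
# 1-D Gaussian mixture with k components). Numbers are illustrative only.
fits = {1: (-450.0, 2), 2: (-380.0, 5), 3: (-378.5, 8), 4: (-378.0, 11)}

scores = {k: aic(logL, p) for k, (logL, p) in fits.items()}
best_k = min(scores, key=scores.get)
print(best_k)  # 2
```

Beyond k = 2 the likelihood barely improves, so the parameter penalty dominates and AIC stops the search.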

  1. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  2. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Scientific Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
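
Analogy-based estimation of the kind evaluated here can be sketched in a few lines: predict the effort of a new project as the average effort of its k most similar completed projects. The feature choices and all project data below are hypothetical, not from the NASA data set:

```python
import math

def analogy_estimate(history, target, k=2):
    """Estimate effort for `target` as the mean effort of its k nearest
    historical analogues (Euclidean distance on the feature vectors)."""
    ranked = sorted(
        (math.dist(features, target), effort) for features, effort in history
    )
    nearest = ranked[:k]
    return sum(effort for _, effort in nearest) / k

# (size in KSLOC, complexity 1-5) -> effort in person-months; all invented
history = [
    ((10.0, 2.0), 30.0),
    ((12.0, 2.0), 36.0),
    ((50.0, 4.0), 200.0),
    ((55.0, 5.0), 240.0),
]
print(analogy_estimate(history, (11.0, 2.0)))  # 33.0
```

In practice features are normalized before computing distances so that no single attribute dominates; that step is omitted here for brevity.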

  3. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
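
For reference, the bivariate logistic max-stable model mentioned above is commonly written, with unit Fréchet margins and dependence parameter α, as (a standard parameterization; the paper's notation may differ):

```latex
G(z_1, z_2) = \exp\!\left\{ -\left( z_1^{-1/\alpha} + z_2^{-1/\alpha} \right)^{\alpha} \right\},
\qquad 0 < \alpha \le 1,
```

where α → 1 corresponds to independence of the componentwise maxima and α → 0 to complete dependence.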

  4. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  5. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  6. Phase estimation in optical interferometry

    CERN Document Server

    Rastogi, Pramod

    2014-01-01

    Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre

  7. An Analytical Cost Estimation Procedure

    National Research Council Canada - National Science Library

    Jayachandran, Toke

    1999-01-01

    Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...

  8. Spectral unmixing: estimating partial abundances

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-01-01

    Full Text Available techniques is complicated when considering very similar spectral signatures. Iron-bearing oxide/hydroxide/sulfate minerals have similar spectral signatures. The study focuses on how could estimates of abundances of spectrally similar iron-bearing oxide...

  9. 50th Percentile Rent Estimates

    Data.gov (United States)

    Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...

  10. LPS Catch and Effort Estimation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...

  11. Exploratory shaft liner corrosion estimate

    International Nuclear Information System (INIS)

    Duncan, D.R.

    1985-10-01

    An estimate of expected corrosion degradation during the 100-year design life of the Exploratory Shaft (ES) is presented. The basis for the estimate is a brief literature survey of corrosion data, in addition to data taken by the Basalt Waste Isolation Project. The scope of the study covers the expected corrosion environment of the ES and the corrosion modes of general corrosion, pitting and crevice corrosion, dissimilar metal corrosion, and environmentally assisted cracking. The expected internal and external environment of the shaft liner is described in detail and the estimated effects of each corrosion mode are given. The maximum amount of general corrosion degradation was estimated to be 70 mils at the exterior and 48 mils at the interior, at the shaft bottom. Corrosion at welds or mechanical joints could be significant, depending on design. Once a final corrosion allowance has been established by the project, it will be added to the design criteria. 10 refs., 6 figs., 5 tabs
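
As a sanity check of the figures above, the quoted general-corrosion totals correspond to modest uniform annual rates sustained over the 100-year design life. The rates below are simply back-calculated from the report's totals, not independent data:

```python
design_life_yr = 100

# Uniform rates implied by the report's totals (mils per year)
rate_exterior_mil_per_yr = 0.70   # 70 mils over 100 years
rate_interior_mil_per_yr = 0.48   # 48 mils over 100 years

loss_ext = rate_exterior_mil_per_yr * design_life_yr
loss_int = rate_interior_mil_per_yr * design_life_yr
print(round(loss_ext), round(loss_int))  # 70 48
```

A design corrosion allowance would take the bounding (exterior) loss plus any margin the project chooses for the other corrosion modes.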

  12. Project Cost Estimation for Planning

    Science.gov (United States)

    2010-02-26

    For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...

  13. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue in focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed mode. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  14. Estimating Emissions from Railway Traffic

    DEFF Research Database (Denmark)

    Jørgensen, Morten W.; Sorenson, Spencer C.

    1998-01-01

    Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...

  15. Travel time estimation using Bluetooth.

    Science.gov (United States)

    2015-06-01

    The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to : estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the : ...

  16. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.

  17. Estimating solar radiation in Ghana

    International Nuclear Information System (INIS)

    Anane-Fenin, K.

    1986-04-01

    The estimates of global radiation on a horizontal surface for 9 towns in Ghana, West Africa, are deduced from their sunshine data using two methods developed by Angstrom and Sabbagh. An appropriate regional parameter is determined with the first method and used to predict solar irradiation in all 9 stations with an accuracy better than 15%. Estimates of diffuse solar irradiation using the correlations of Page, Liu and Jordan, and three other authors are performed and the results examined. (author)
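
The Angstrom method referred to above regresses the clearness ratio on relative sunshine duration: H/H0 = a + b(n/N), where H is global radiation, H0 extraterrestrial radiation, n measured sunshine hours and N maximum possible sunshine hours. The coefficients in this sketch are generic textbook defaults, not the regional parameter fitted for Ghana:

```python
def angstrom_prescott(n_over_N, a=0.25, b=0.50):
    """Estimate H/H0 from relative sunshine duration n/N via the
    Angstrom-Prescott regression H/H0 = a + b*(n/N).
    a and b here are illustrative defaults, not fitted regional values."""
    return a + b * n_over_N

# 60% of maximum possible sunshine hours
print(round(angstrom_prescott(0.6), 2))  # 0.55
```

In the study, a and b are fitted to the stations' sunshine records so the single regional regression predicts irradiation at all 9 sites.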

  18. The Psychology of Cost Estimating

    Science.gov (United States)

    Price, Andy

    2016-01-01

    Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.

  19. Estimating emissions from railway traffic

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, M.W.; Sorenson, C.

    1997-07-01

    The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
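
The energy-based method described above reduces to a simple chain: energy per train-km, times an emission factor for the traction source (diesel fuel or grid electricity), divided by the passenger load. All figures below are illustrative placeholders, not values from the report:

```python
def train_emissions_g_per_pkm(energy_kwh_per_train_km, ef_g_co2_per_kwh, passengers):
    """CO2 per passenger-km = energy per train-km * emission factor / load."""
    return energy_kwh_per_train_km * ef_g_co2_per_kwh / passengers

# Hypothetical inputs: same route and load, different traction sources
diesel = train_emissions_g_per_pkm(30.0, 270.0, 300)    # diesel-fuel EF (made up)
electric = train_emissions_g_per_pkm(25.0, 450.0, 300)  # grid-mix EF (made up)
print(diesel, electric)  # 27.0 37.5
```

Because the traffic parameters are shared, the comparison between electric and diesel traction turns entirely on the energy consumption and the emission factor of the power source, which is exactly why the report estimates energy consumption first.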

  20. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...

  1. Computer-Aided Parts Estimation

    OpenAIRE

    Cunningham, Adam; Smart, Robert

    1993-01-01

    In 1991, Ford Motor Company began deployment of CAPE (computer-aided parts estimating system), a highly advanced knowledge-based system designed to generate, evaluate, and cost automotive part manufacturing plans. CAPE is engineered on an innovative, extensible, declarative process-planning and estimating knowledge representation language, which underpins the CAPE kernel architecture. Many manufacturing processes have been modeled to date, but eventually every significant process in motor veh...

  2. Guideline to Estimate Decommissioning Costs

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The primary objective of this work is to provide guidelines for estimating the decommissioning cost and to give stakeholders plausible information for understanding decommissioning activities in a reasonable manner, which eventually contributes to acquiring public acceptance for the nuclear power industry. Although several decommissioning cost estimates have been made for a few commercial nuclear power plants, the different technical, site-specific and economic assumptions used make it difficult to interpret those cost estimates and compare them with that of a relevant plant. Trustworthy cost estimates are crucial to planning a safe and economic decommissioning project. The typical approach is to break down the decommissioning project into a series of discrete and measurable work activities. Although plant-specific differences derived from the economic and technical assumptions make it difficult for a licensee to estimate reliable decommissioning costs, estimating decommissioning costs is among the most crucial processes, since it encompasses the full spectrum of activities from planning to the final evaluation of whether a decommissioning project has been carried out successfully from the safety and economic points of view. Hence, it is clear that tenacious efforts are needed to perform the decommissioning project successfully.
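
The "typical approach" mentioned above, breaking the project into discrete, measurable work activities, amounts to a cost roll-up over a work breakdown structure. The activities, costs, and contingency rate here are invented for illustration:

```python
# Hypothetical work breakdown structure: activity -> estimated cost units
activities = {
    "planning_and_licensing": 40.0,
    "decontamination": 120.0,
    "dismantling": 310.0,
    "waste_management": 230.0,
    "site_restoration": 90.0,
}
contingency_rate = 0.15  # allowance for estimating uncertainty (made up)

base = sum(activities.values())        # roll up the measurable activities
total = base * (1 + contingency_rate)  # add contingency on top
print(base, round(total, 1))  # 790.0 908.5
```

Keeping the estimate at the activity level is what makes plant-specific assumptions visible: two plants can be compared line by line rather than only by their totals.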

  3. The Seductive-Plausibility of Patent Hold-Up Myths — A Flawed Historiography of Patents

    DEFF Research Database (Denmark)

    Howells, John; Katznelson, Ron D

    In previous work we have shown that a flawed historiography of patents continues to be the basis for patent policy advocacy. We set out objective standards of evidence that allegations of development block due to assertion of patents must meet. We show the extent of the errors in the historical...... record in the aircraft, automobile, radio and incandescent lamp technologies. We then evaluate how they measure against the objective standards. We find many simple errors and that an absence of indicia of development block characterise scholarship alleging that assertion of patents blocked development...... of multiple case studies subjected to such standards justifies the rebuttable presumption that “pioneer patents have never blocked development”....

  4. A parametric study of powder holdups in a packed bed under ...

    African Journals Online (AJOL)

    Nafiisah

    Packed bed, turbulent flow, mathematical modelling, decreasing ..... The vertical gauge pressure distribution, at a distance of 0.06 m away from the tuyere ... fines from these locations as the interactive forces are more than the drag forces. It.

  5. Characterization of process holdup material at the Portsmouth Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Boyd, D.E.; Miller, R.R.

    1986-01-01

    The cascade material balance area at the Portsmouth Gaseous Diffusion Plant is characterized by continuous, large, in-process inventories of gaseous uranium hexafluoride (UF₆) and very large inputs and outputs of UF₆ over a complete range of ²³⁵U enrichments. Monthly inventories are conducted to quantify the in-place material, but the inventory techniques are blind to material not in the gas phase. Material is removed from the gas phase by any one of four mechanisms: (1) freeze-outs, which are the solidification of UF₆; (2) inleakage of wet air, which produces solid uranium oxyfluorides; (3) consumption of uranium through UF₆ reaction with internal metal surfaces; and (4) adsorption of UF₆ on internal surfaces. This presentation describes efforts to better characterize and, where possible, to eliminate or reduce the effects of these mechanisms on material accountability. Freeze-outs and wet air deposits occur under abnormal operating conditions, and techniques are available to prevent, detect and reverse them. Consumption and adsorption occur under normal operating conditions and are more complex to manage; however, computer models have been developed to quantify monthly the net effects due to consumption and adsorption. These models have shown that consumption and adsorption effects on inventory differences are significant

  6. The importance of understanding process holdup. A Department of Energy (DOE) view

    International Nuclear Information System (INIS)

    Hammond, G.A.; Hawkins, R.L.

    1988-01-01

    Residual material in processing equipment is today, and will continue to be, one of the major problems in controlling and accounting for nuclear material. Existing facilities were designed for product quantity, quality, and safety; not for minimization and quantification of residual material in process. With the development of measurement systems, and with enhanced material control and accounting practices and procedures, inventory differences have been emphasized. The improvement in processing input and output measurements has highlighted the problems of quantifying residual process material. The primary purpose for quantifying material held up in process is to determine the inventory difference and its related uncertainties or statistical variances. This quantification and its associated problems must be addressed if we are to prevent, deter, and detect theft and/or diversion of nuclear materials

  7. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
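
Of the methods compared in such studies, the kernel estimator is the simplest to sketch: the density at a point is the average of Gaussian kernels centred on each sample, with the bandwidth as the key tuning parameter. A minimal NumPy version (bandwidth chosen arbitrarily, not by any of the selection rules the literature studies):

```python
import numpy as np

def gaussian_kde(x_grid, samples, bandwidth):
    """Kernel density estimate: average of Gaussian kernels, one per sample."""
    z = (x_grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
samples = rng.standard_normal(200)          # small sample from N(0, 1)
grid = np.linspace(-4.0, 4.0, 401)
density = gaussian_kde(grid, samples, bandwidth=0.4)

mass = density.sum() * (grid[1] - grid[0])  # Riemann sum over the grid
print(round(mass, 2))  # 1.0
```

The sensitivity-to-parameters question raised above is visible here directly: shrinking the bandwidth makes the estimate spiky around individual samples, while enlarging it oversmooths the true shape, and in small samples no bandwidth may recover the density well.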

  8. Weldon Spring historical dose estimate

    International Nuclear Information System (INIS)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr

  9. Weldon Spring historical dose estimate

    Energy Technology Data Exchange (ETDEWEB)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  10. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...

  11. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  12. estimating formwork striking time for concrete mixes estimating

    African Journals Online (AJOL)

    eobe

    In this study, we estimated the time for strength development in concrete cured up to 56 days. Water. In this .... regression analysis using MS Excel 2016 Software performed on the ..... [1] Abolfazl, K. R, Peroti S. and Rahemi L 'The Effect of.

  13. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control....

  14. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  15. Estimation of effective wind speed

    Science.gov (United States)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed forward. Unfortunately, no accurate measurement of the effective wind speed is available online from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
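The final inversion step described in the abstract can be sketched numerically: given estimates of aerodynamic torque and rotor speed plus the measured pitch angle, solve the static torque balance for the wind speed. In the sketch below the Cp(λ, β) surface, the air density RHO, the rotor radius R, and all parameter values are illustrative assumptions standing in for a real turbine's lookup table; this is not the estimator from the paper.

```python
import math

RHO = 1.225   # air density [kg/m^3]
R = 40.0      # rotor radius [m] (hypothetical turbine)

def cp(tsr, beta):
    """Illustrative power-coefficient surface Cp(tip-speed ratio, pitch in deg).
    A real turbine would use the manufacturer's lookup table instead."""
    lam_i = 1.0 / (1.0 / (tsr + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * math.exp(-21.0 / lam_i) + 0.0068 * tsr

def aero_torque(v, omega, beta):
    """Static aerodynamic torque at wind speed v, rotor speed omega, pitch beta."""
    tsr = omega * R / v
    power = 0.5 * RHO * math.pi * R**2 * cp(tsr, beta) * v**3
    return power / omega

def estimate_wind_speed(tau_est, omega_est, beta, lo=2.0, hi=30.0, tol=1e-6):
    """Invert the static model by bisection: find v such that
    aero_torque(v, omega_est, beta) matches the estimated torque.
    Assumes a single sign change of the residual on [lo, hi]."""
    f = lambda v: aero_torque(v, omega_est, beta) - tau_est
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With noise-free torque and rotor-speed estimates the bisection recovers the wind speed essentially exactly; with observer noise the accuracy degrades accordingly.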

  16. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

    Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of the accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a continually used process whose purpose is to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in the contracts with third parties, in the supporting documents, etc. However, the uncertainties in which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items composing the financial statements must be determined by the use of estimates.

  17. Integral Criticality Estimators in MCATK

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory

    2016-06-14

    The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.

  18. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  19. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    . In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...

  20. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
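The intensity adjustment the abstract describes can be illustrated under a Lambertian reflectance assumption: with an estimated unit surface normal and a known light direction, the reflectivity (albedo) is the observed intensity divided by the cosine of the incidence angle. This is a minimal sketch of that one step, not the authors' full segmentation algorithm; the function name and the Lambertian model are assumptions.

```python
def reflectivity_estimate(intensity, surface_normal, light_dir):
    """Under a Lambertian model I = albedo * (n . l), recover the albedo
    from an observed intensity, an estimated unit surface normal and a
    unit light direction."""
    shading = sum(n * l for n, l in zip(surface_normal, light_dir))
    if shading <= 0.0:
        raise ValueError("surface faces away from the light source")
    return intensity / shading
```

Histogram-based segmentation applied to these adjusted values then groups pixels by constant reflectivity rather than constant image intensity.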

  1. Estimation of strong ground motion

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1993-01-01

    A fault model has been developed to estimate strong ground motion in consideration of the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model: the first is theoretical, while the second is semi-empirical. Though the latter is more practical than the former for estimating input motions, it requires at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event

  2. Multicollinearity and maximum entropy leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that MMEL estimator performs significantly better than OLS as well as MEL estimators.

  3. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    unrecorded alcohol; methods of estimation In this paper we focused on methods of estimation of the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for the estimation of unrecorded alcohol consumption.

  4. Collider Scaling and Cost Estimation

    International Nuclear Information System (INIS)

    Palmer, R.B.

    1986-01-01

    This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs

  5. Helicopter Toy and Lift Estimation

    Science.gov (United States)

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)

  6. Estimation of potential uranium resources

    International Nuclear Information System (INIS)

    Curry, D.L.

    1977-09-01

    Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates

  7. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  8. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
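Crude Monte Carlo estimation of a reliability quantity can be sketched in a few lines: sample the basic variables, count limit-state violations, and divide by the sample size. The limit state g = R − S and the normal resistance and load distributions below are illustrative assumptions, not the bridge data from the paper.

```python
import random

def crude_mc_failure_prob(n_samples, seed=0):
    """Crude Monte Carlo estimate of the failure probability P(g < 0)
    for the limit state g = R - S, with resistance R ~ N(5, 1) and
    load S ~ N(3, 1) (illustrative distributions)."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_samples)
        if rng.gauss(5.0, 1.0) - rng.gauss(3.0, 1.0) < 0.0
    )
    return failures / n_samples
```

For this toy limit state the exact value is Φ(−2/√2) ≈ 0.0786; the crude estimator converges to it at the usual 1/√n Monte Carlo rate.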

  9. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....

  10. Multispacecraft current estimates at swarm

    DEFF Research Database (Denmark)

    Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.

    2015-01-01

    During the first several months of the three-spacecraft Swarm mission all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...

  11. Estimating Swedish biomass energy supply

    International Nuclear Information System (INIS)

    Johansson, J.; Lundqvist, U.

    1999-01-01

    Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected not to be needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase its competitiveness. Economic instruments, including carbon taxes and subsidies, and the allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy

  12. Estimates of wildland fire emissions

    Science.gov (United States)

    Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao

    2013-01-01

    Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding the environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...

  13. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass-efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra-wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity-based planetary exploration robotic prototype. In particular, we conduct tests evaluating both the robot's success in estimating global position relative to fixed ranging base stations during rolling maneuvers and its local behavior due to small-amplitude deformations induced by cable actuation.

  14. Fuel Estimation Using Dynamic Response

    National Research Council Canada - National Science Library

    Hines, Michael S

    2007-01-01

    ...'s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...

  15. Empirical estimates of the NAIRU

    DEFF Research Database (Denmark)

    Madsen, Jakob Brøchner

    2005-01-01

    equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact, and a new method is suggested which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...

  16. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high and low resolutions of the data. The proposed complementary estimator combines the two resolutions of velocity, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. With this method the velocity is estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. The method allows velocity sensors to be built with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
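The core idea, velocity from differentiated position trusted at low frequency and velocity from integrated acceleration trusted at high frequency, can be sketched with a classical first-order complementary filter. This is a simplification for illustration: the paper blends the two paths with wavelet filter banks rather than the single-pole blend below.

```python
def complementary_velocity(pos, acc, dt, alpha=0.95):
    """Fuse two velocity estimates sample by sample: numerical
    differentiation of position (noisy but drift-free, trusted at low
    frequency) and integration of acceleration (smooth but drifting,
    trusted at high frequency)."""
    v = 0.0
    estimates = []
    for k in range(1, len(pos)):
        v_diff = (pos[k] - pos[k - 1]) / dt  # differentiated position
        v_int = v + acc[k] * dt              # integrated acceleration
        v = alpha * v_int + (1.0 - alpha) * v_diff
        estimates.append(v)
    return estimates
```

The blend weight alpha sets the crossover frequency between the two paths; closer to 1 trusts the integrated acceleration over a wider band.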

  17. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

  18. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  19. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  20. Correlation Dimension Estimation for Classification

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2006-01-01

    Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics

  1. Molecular pathology and age estimation.

    Science.gov (United States)

    Meissner, Christoph; Ritz-Timme, Stefanie

    2010-12-15

    Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed on the DNA-level as well as on the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid in forensic medicine to estimate the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA-level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation end products have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies and highest accuracy of results. The three other mentioned alterations often lack standardized procedures, published data are sparse and often have the character of pilot studies. Nevertheless it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
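Age estimation from aspartic acid racemisation typically rests on first-order kinetics: the quantity ln((1 + D/L)/(1 − D/L)) grows approximately linearly with age, so a calibration line fitted to reference samples can be inverted for an unknown sample. The sketch below illustrates that workflow with ordinary least squares; the calibration values in the accompanying example are fabricated for illustration, not data from any study.

```python
import math

def racemization_transform(dl_ratio):
    """First-order racemization kinetics: ln((1 + D/L) / (1 - D/L))
    increases approximately linearly with age."""
    return math.log((1.0 + dl_ratio) / (1.0 - dl_ratio))

def fit_calibration(ages, dl_ratios):
    """Ordinary least squares fit of the transformed D/L ratio against
    known ages (in practice, from reference samples such as dentine)."""
    ys = [racemization_transform(r) for r in dl_ratios]
    n = len(ages)
    mean_a, mean_y = sum(ages) / n, sum(ys) / n
    slope = (sum((a - mean_a) * (y - mean_y) for a, y in zip(ages, ys))
             / sum((a - mean_a) ** 2 for a in ages))
    return slope, mean_y - slope * mean_a

def estimate_age(dl_ratio, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (racemization_transform(dl_ratio) - intercept) / slope
```

The abstract's point about standardization matters here: the slope and intercept are only transferable between samples collected and prepared by the same protocol.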

  2. 23 CFR 635.115 - Agreement estimate.

    Science.gov (United States)

    2010-04-01

    ... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...

  3. On semiautomatic estimation of surface area

    DEFF Research Database (Denmark)

    Dvorak, J.; Jensen, Eva B. Vedel

    2013-01-01

    and the surfactor. For ellipsoidal particles, it is shown that the flower estimator is equal to the pivotal estimator based on support function measurements along four perpendicular rays. This result makes the pivotal estimator a powerful approximation to the flower estimator. In a simulation study of prolate....... If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area....... For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator...

  4. Estimating sediment discharge: Appendix D

    Science.gov (United States)

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  5. Estimating Foreign Exchange Reserve Adequacy

    Directory of Open Access Journals (Sweden)

    Abdul Hakim

    2013-04-01

    Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for import, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR estimates, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
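The volatility-to-VaR pipeline the abstract describes can be sketched in two steps: run a GJR(1,1) conditional-variance recursion, in which negative past shocks get the extra asymmetry weight gamma, then convert the final variance to a one-sided normal VaR. The parameter values shown are assumptions for illustration; in practice they would be estimated by maximum likelihood from the return series.

```python
import math

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """Conditional variance from a GJR(1,1) recursion:
    sigma2_t = omega + (alpha + gamma * 1[r < 0]) * r^2 + beta * sigma2_{t-1},
    iterated over the return history."""
    sigma2 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    for r in returns:
        sigma2 = omega + (alpha + (gamma if r < 0.0 else 0.0)) * r * r + beta * sigma2
    return sigma2

def conditional_var(sigma2, z=1.645):
    """One-sided normal Value-at-Risk at the quantile implied by z
    (1.645 corresponds to 95% confidence)."""
    return z * math.sqrt(sigma2)
```

The asymmetry term is what lets negative shocks raise next-period volatility more than positive shocks of the same size, the effect the abstract reports.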

  6. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
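The fixed-threshold volume estimation step can be sketched directly: count the voxels above a chosen fraction of the maximum reconstructed intensity and multiply by the physical voxel volume. The 0.4 threshold fraction and the toy image in the example are assumptions for illustration; a real study would first apply the attenuation and scatter corrections the abstract describes.

```python
def volume_from_spect(image, voxel_volume_ml, threshold_fraction=0.4):
    """Fixed-threshold volume estimate from a reconstructed SPECT image
    (nested lists, planes x rows x columns): count the voxels whose
    intensity exceeds a fraction of the image maximum and scale by the
    physical voxel volume in millilitres."""
    values = [v for plane in image for row in plane for v in row]
    cutoff = threshold_fraction * max(values)
    return sum(1 for v in values if v > cutoff) * voxel_volume_ml
```

The estimate is sensitive to the threshold choice, which is why the abstract compares fixed thresholding against automatic (histogram-based) thresholding.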

  7. Comments on mutagenesis risk estimation

    International Nuclear Information System (INIS)

    Russell, W.L.

    1976-01-01

    Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements

  8. Bayesian estimation in homodyne interferometry

    International Nuclear Information System (INIS)

    Olivares, Stefano; Paris, Matteo G A

    2009-01-01

    We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound to the variance, and discuss its convergence by looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound to precision imposed by the quantum Cramer-Rao theorem. The performances of our two-step methods are investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.
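The Bayesian updating idea can be sketched with a toy Gaussian outcome model rather than the actual squeezed-vacuum homodyne statistics; the sin(phase) signal model, the noise level, and the phase grid below are all assumptions made purely for illustration.

```python
import math
import random

def bayes_update(grid, prior, outcome, sigma, mean_of):
    """One Bayesian update over a discretized phase grid.

    grid    -- candidate phase values
    prior   -- probabilities over the grid (sums to 1)
    outcome -- one measurement sample
    sigma   -- assumed outcome noise (hypothetical toy model)
    mean_of -- maps a phase to the expected outcome
    """
    like = [math.exp(-(outcome - mean_of(p)) ** 2 / (2 * sigma ** 2)) for p in grid]
    post = [pr * l for pr, l in zip(prior, like)]
    s = sum(post)
    return [p / s for p in post]

# Toy model: expected outcome is sin(phase); true phase 0.7, grid on [0, pi/2]
random.seed(1)
grid = [i * 0.01 for i in range(158)]
prior = [1.0 / len(grid)] * len(grid)
for _ in range(200):
    y = math.sin(0.7) + random.gauss(0, 0.1)
    prior = bayes_update(grid, prior, y, 0.1, math.sin)
estimate = grid[max(range(len(grid)), key=prior.__getitem__)]
```

The posterior sharpens around the true phase as measurements accumulate, which is the convergence behaviour the abstract discusses.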

  9. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  10. Cost Estimates and Investment Decisions

    International Nuclear Information System (INIS)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)

  11. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported...
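One simple way to handle a delayed measurement is to rewind the filter to the measurement's timestamp and re-filter from there, which can be sketched with a scalar random-walk filter. This is an illustrative strategy in the spirit of the comparison the paper describes, not its actual extended Kalman filter; the class name and all noise values are invented.

```python
class DelayedKalman1D:
    """Scalar random-walk Kalman filter that re-filters when a delayed
    measurement arrives. Noise values are illustrative only."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.q, self.r = q, r
        self.history = [(x0, p0)]       # (state, variance) after each step

    def step(self, z=None):
        x, p = self.history[-1]
        p += self.q                     # predict: add random-walk process noise
        if z is not None:               # update with an in-time measurement
            k = p / (p + self.r)
            x, p = x + k * (z - x), (1 - k) * p
        self.history.append((x, p))

    def delayed_measurement(self, k_step, z, later_meas):
        """Rewind to step k_step, fuse z there, then re-run later steps."""
        self.history = self.history[: k_step + 1]
        self.step(z)
        for m in later_meas:
            self.step(m)

    @property
    def estimate(self):
        return self.history[-1][0]

kf = DelayedKalman1D()
kf.step(1.0)                       # step 1: measurement arrived in time
kf.step(None)                      # step 2: its measurement is delayed
without = kf.estimate
kf.delayed_measurement(1, 1.0, []) # step 2's measurement arrives; re-filter
with_delay = kf.estimate
```

Fusing the late measurement pulls the estimate closer to the true value than simply discarding it.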

  12. Prior information in structure estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka

    2003-01-01

    Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf

  13. Radiation in space: risk estimates

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    2002-01-01

    The complexity of radiation environments in space makes estimation of risks more difficult than for the protection of terrestrial population. In deep space the duration of the mission, position of the solar cycle, number and size of solar particle events (SPE) and the spacecraft shielding are the major determinants of risk. In low-earth orbit missions there are the added factors of altitude and orbital inclination. Different radiation qualities such as protons and heavy ions and secondary radiations inside the spacecraft such as neutrons of various energies, have to be considered. Radiation dose rates in space are low except for short periods during very large SPEs. Risk estimation for space activities is based on the human experience of exposure to gamma rays and to a lesser extent X rays. The doses of protons, heavy ions and neutrons are adjusted to take into account the relative biological effectiveness (RBE) of the different radiation types and thus derive equivalent doses. RBE values and factors to adjust for the effect of dose rate have to be obtained from experimental data. The influence of age and gender on the cancer risk is estimated from the data from atomic bomb survivors. Because of the large number of variables the uncertainties in the probability of the effects are large. Information needed to improve the risk estimates includes: (1) risk of cancer induction by protons, heavy ions and neutrons; (2) influence of dose rate and protraction, particularly on potential tissue effects such as reduced fertility and cataracts; and (3) possible effects of heavy ions on the central nervous system. Risk cannot be eliminated and thus there must be a consensus on what level of risk is acceptable. (author)

  14. Properties of estimated characteristic roots

    OpenAIRE

    Bent Nielsen; Heino Bohn Nielsen

    2008-01-01

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present as this implies a non-differentiability so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...

  15. Recent estimates of capital flight

    OpenAIRE

    Claessens, Stijn; Naude, David

    1993-01-01

    Researchers and policymakers have in recent years paid considerable attention to the phenomenon of capital flight. Researchers have focused on four questions: What concept should be used to measure capital flight? What figure for capital flight will emerge, using this measure? Can the occurrence and magnitude of capital flight be explained by certain (economic) variables? What policy changes can be useful to reverse capital flight? The authors focus strictly on presenting estimates of capital...

  16. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  17. Reactor core performance estimating device

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.

    1995-01-01

    The present invention can autonomously simplify a neural net model, thereby making it possible to estimate conveniently, by a simple calculation in a short period of time, various quantities that represent reactor core performance. Namely, a reactor core performance estimation device comprises a neural network which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals to input nerve cells, and outputs estimated values of each quantity representing the reactor core performance as output signals of output nerve cells. In this case, the neural network (1) has the structure of an extended multi-layered model with direct coupling from an upstream layer to each of the downstream layers, (2) has a forgetting constant q in the correction equation for a joined load value ω using the inverse error propagation method, (3) learns various quantities representing reactor core performance determined using physical models as teacher signals, (4) sets the joined load value ω to 0 when it decreases below a predetermined value during the learning described above, and (5) eliminates elements of the neural network having all of their joined load values decreased to 0. As a result, the neural net model comprises an autonomously simplifying means. (I.S.)

  18. Contact Estimation in Robot Interaction

    Directory of Open Access Journals (Sweden)

    Filippo D'Ippolito

    2014-07-01

    In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.

  19. Abundance estimation and conservation biology

    Science.gov (United States)

    Nichols, J.D.; MacKenzie, D.I.

    2004-01-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention
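For the closed-population setting cited above (e.g., Darroch, 1958; Otis et al., 1978), the simplest abundance estimator is the two-sample Lincoln-Petersen estimator; Chapman's bias-corrected variant is sketched below as a hedged illustration. The counts are invented, not data from any study in the session.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's variant of the Lincoln-Petersen abundance estimator.

    n1 -- animals captured and marked in the first sample
    n2 -- animals captured in the second sample
    m2 -- marked animals recaptured in the second sample
    Assumes a closed population, as in the closed-population models cited.
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts: 100 marked, 80 captured later, of which 20 were marked
n_hat = chapman_estimate(100, 80, 20)
```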

  20. Abundance estimation and Conservation Biology

    Directory of Open Access Journals (Sweden)

    Nichols, J. D.

    2004-06-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al., 1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that

  1. Estimating the Costs of Preventive Interventions

    Science.gov (United States)

    Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin

    2007-01-01

    The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…

  2. Thermodynamics and life span estimation

    International Nuclear Information System (INIS)

    Kuddusi, Lütfullah

    2015-01-01

    In this study, the life span of people living in seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. The people living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to the food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. The two entropy generations, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in seven regions of Turkey with different food habits. In order to estimate the life span, some statistics of Turkish Statistical Institute regarding the food habits of the people living in seven regions of Turkey are used. The life spans of people that live in Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding the life span of people living in seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have effect on his life span
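The core arithmetic of the method, dividing the statistically determined lifetime entropy budget by the diet-dependent entropy generation rate, reduces to a single ratio; the figures below are hypothetical, not the paper's values for Turkey.

```python
def estimated_life_span(lifetime_entropy, entropy_rate_per_year):
    """Life span implied by the entropy-generation argument: the years
    needed for the per-year entropy generation (set by food habits) to
    exhaust the lifetime entropy budget per unit mass.
    Units and figures here are illustrative only."""
    return lifetime_entropy / entropy_rate_per_year

# Hypothetical: a budget of 11.4 kJ/(kg K) consumed at 0.15 kJ/(kg K year)
years = estimated_life_span(11.4, 0.15)
```

A region whose food habits raise the entropy generation rate would, under this model, show a proportionally shorter estimated life span.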

  3. The estimation of genetic divergence

    Science.gov (United States)

    Holmquist, R.; Conroy, T.

    1981-01-01

    Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.

  4. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for a nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  5. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

    The human being is exposed to strong artificial radiation sources in two main ways: the first concerns occupationally exposed personnel (POE), and the second, persons who require radiological treatment. A third, less common way is by accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows analysis of aberrations in the chromosomes. (Author)

  6. Stochastic estimation of electricity consumption

    International Nuclear Information System (INIS)

    Kapetanovic, I.; Konjic, T.; Zahirovic, Z.

    1999-01-01

    Electricity consumption forecasting contributes to the stable functioning of the power system. It is very important for rationality, for increasing the efficiency of control processes, and for development planning in all aspects of society. On a scientific basis, forecasting is a possible way to solve such problems. Among the different models that have been used in the area of forecasting, the stochastic aspect of forecasting, as a part of quantitative models, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. Therefore, the main aim of this paper is to present the stochastic forecasting aspect using short time series. (author)
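As a hedged stand-in for the ARIMA side of such forecasting, a least-squares AR(1) fit and forecast over a short series can be sketched as follows; the series, the model order, and the function names are invented for illustration.

```python
def fit_ar1(series):
    """Least-squares fit of an AR(1) model x_t = c + phi * x_{t-1} + e_t,
    a minimal stand-in for the ARIMA models discussed (illustrative only)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recursion forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Short synthetic consumption series (hypothetical units)
data = [100, 102, 101, 103, 104, 103, 105]
c, phi = fit_ar1(data)
fc = forecast(data, 3, c, phi)
```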

  7. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  8. Location Estimation of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kamil ŽIDEK

    2009-06-01

    This contribution describes a mathematical model (kinematics) for a mobile robot carriage. The mathematical model is fully parametric: it is designed universally for a three- or four-wheeled carriage of any dimensions. The further conditions are: the rear wheels are the driving wheels, and the front wheels set the robot's turning angle. The position of the front wheel gives the actual position of the robot, which is described by coordinates x, y and by the angle of the front wheel α in a reference position. The main reason for implementing the model is indoor navigation: we need some estimate of the robot's position, especially after the robot turns. A further use is for outdoor navigation, especially for refining GPS information.
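The kind of kinematic update the contribution describes can be sketched with a standard bicycle-model approximation; the wheelbase, speed, steering angle, and time step below are assumptions for the example, not values from the contribution.

```python
import math

def advance(x, y, theta, v, alpha, wheelbase, dt):
    """One integration step of a bicycle-model approximation for a carriage
    with driven rear wheels and a steering front wheel (angle alpha).
    All parameter values used below are illustrative."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(alpha) * dt
    return x, y, theta

# Straight run: a zero front-wheel angle keeps the heading constant
x, y, th = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, th = advance(x, y, th, v=1.0, alpha=0.0, wheelbase=0.5, dt=0.1)
```

Feeding a nonzero alpha makes theta change each step, which is exactly the post-turn position estimate the abstract says indoor navigation needs.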

  9. Estimation of the energy needs

    International Nuclear Information System (INIS)

    Ailleret

    1955-01-01

    The present report draws up a balance of present energy consumption and of the consumption that can be estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydraulic electric energy. Market growth stems essentially from the development of industrial activity and from new applications dependent on the cost and the distribution of electric energy. To this effect, atomic energy offers good industrial prospects, complementing the present energy resources in order to meet the new needs. (M.B.) [fr

  10. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  11. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  12. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
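As a hedged, one-parameter illustration of the Gauss-Newton method for algebraic models listed above (not code from the book), consider fitting y = exp(theta * x) to data:

```python
import math

def gauss_newton(residual, jacobian, theta, iters=20):
    """Gauss-Newton iteration for a one-parameter algebraic model.

    residual(theta) -> list of residuals r_i
    jacobian(theta) -> list of derivatives d r_i / d theta
    For a single parameter the normal-equation step reduces to
    dtheta = -(J^T r) / (J^T J).
    """
    for _ in range(iters):
        r = residual(theta)
        j = jacobian(theta)
        theta -= sum(a * b for a, b in zip(j, r)) / sum(a * a for a in j)
    return theta

# Fit y = exp(theta * x) to exact data generated with theta = 0.5
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
res = lambda t: [math.exp(t * x) - y for x, y in zip(xs, ys)]
jac = lambda t: [x * math.exp(t * x) for x in xs]
theta_hat = gauss_newton(res, jac, theta=0.1)
```

Because the data are noise-free, the iteration converges to the generating parameter; with noisy data it converges to the least-squares fit instead.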

  13. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  14. Note on demographic estimates 1979.

    Science.gov (United States)

    1979-01-01

    Based on UN projections, national projections, and the South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region, India, 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978-9 Hong Kong experienced the highest rate of growth, 6.2%, Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years life expectancy in Japan is highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.

  15. Practical global oceanic state estimation

    Science.gov (United States)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

    The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

  16. LOD estimation from DORIS observations

    Science.gov (United States)

    Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs

    2016-04-01

    The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching only a precision at the level of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show the possibility of reaching an accuracy of around 0.1 ms per day when the cross-track harmonics in the satellite orbit model are not adjusted. The paper presents the long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, the LOD series from the individual DORIS satellite solutions are also analysed.

  17. CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE

    Directory of Open Access Journals (Sweden)

    Nino Serdarevic

    2012-10-01

    Full Text Available This paper presents research results on the financial reporting quality of B&H firms, utilizing the empirical relation between accounting conservatism, generated in created critical accounting policy choices, and management abilities in estimates and the prediction power of domicile private sector accounting. Primary research is conducted based on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting comprehensive income (in revised IAS 1) and FAS 157 Fair value measurement. The CAPCBIH variable, which resulted in very poor performance, reveals a considerable lack of recognition of environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.

  18. The need to estimate risks

    International Nuclear Information System (INIS)

    Pochin, E.E.

    1980-01-01

    In an increasing number of situations, it is becoming possible to obtain and compare numerical estimates of the biological risks involved in different alternative courses of action. In some cases these risks are similar in kind, as for example when the risk of inducing fatal cancer of the breast or stomach by x-ray screening of a population at risk is compared with the risk of such cancers proving fatal if not detected by a screening programme. In other cases in which it is important to attempt a comparison, the risks are dissimilar in type, as when the safety of occupations involving exposure to radiation or chemical carcinogens is compared with that of occupations in which the major risks are from lung disease or from accidental injury and death. Similar problems of assessing the relative severity of unlike effects occur in any attempt to compare the total biological harm associated with a given output of electricity derived from different primary fuel sources, with its contributions both of occupational and of public harm. In none of these instances is the numerical frequency of harmful effects alone an adequate measure of total biological detriment, nor is such detriment the only factor which should influence decisions. Estimation of risk appears important, however, since otherwise public health decisions are likely to be made on more arbitrary grounds, and public opinion will continue to be affected predominantly by the type rather than also by the size of risk. (author)

  19. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
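
    The abstract assumes a parametric relationship between the within-set variance of the response and its mean value, fitted from many small sets of replicates. As a hedged illustration only (the program described uses a modified-likelihood fit and its exact functional form may differ), a moment-based fit of a power-law variance function can be sketched:

```python
import numpy as np

# Illustrative sketch (not the described program): estimate a variance
# function Var(y) = a * mean**b from the within-set variability of many
# small sets of replicates, via log-log regression on per-set
# (mean, variance) pairs. All simulated values are hypothetical.
rng = np.random.default_rng(1)
a_true, b_true = 0.5, 1.8
true_means = rng.uniform(10, 1000, size=200)
sets = [rng.normal(m, np.sqrt(a_true * m**b_true), size=8) for m in true_means]

xbar = np.array([s.mean() for s in sets])        # per-set mean response
s2 = np.array([s.var(ddof=1) for s in sets])     # per-set sample variance

# Fit log(s2) = log(a) + b * log(xbar) by ordinary least squares
b_hat, log_a_hat = np.polyfit(np.log(xbar), np.log(s2), 1)
print(b_hat)  # roughly 1.8
```

    In practice a likelihood-based fit (as in the program) weights the small-sample variances more appropriately than this simple regression.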

  20. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

    The conceptual foundations of a general information-theoretic based approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure for information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  1. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogenous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). 
The final goals of the inference are a point estimate
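
    As a minimal illustration of the time-censored case described above (a sketch only, not PHAZE itself): in the constant-hazard limit of the exponential model, lambda(t) = lambda0 * exp(beta*t) with beta = 0, the maximum likelihood estimate from n failures observed over a fixed period T is simply n/T.

```python
# Constant-hazard special case of PHAZE's exponential model (beta = 0):
# for time-censored data with n failures over a fixed observation
# period T, the MLE of the failure rate is lambda_hat = n / T.
# (PHAZE itself also fits the trend parameter beta and the linear and
# Weibull hazard models, with confidence regions and model checks.)
failure_times = [12.0, 40.5, 55.1, 78.9, 90.2]   # hypothetical, in days
T = 100.0                                         # fixed observation period, days

n = len(failure_times)
lambda_hat = n / T
print(lambda_hat)  # 0.05 failures per day
```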

  2. Bayesian estimation methods in metrology

    International Nuclear Information System (INIS)

    Cox, M.G.; Forbes, A.B.; Harris, P.M.

    2004-01-01

    In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods

  3. Residual risk over-estimated

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several safety systems working independently. But the remaining risk, a 'hypothetical' uncontrollable incident with catastrophic effects, is the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the meeting 'Reactor Safety Research' in Cologne showed that risk studies so far were too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents only occur if almost all safety systems fail, that accidents take place much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than was assumed until now. (orig.) [de

  4. Neutron background estimates in GESA

    Directory of Open Access Journals (Sweden)

    Fernandes A.C.

    2014-01-01

    Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10^-8 cm^-2 s^-1, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements which have led to a reduction in the extrinsic neutron background to ~5 × 10^-3 evts/kgd. The calculated event rate induced by the neutron background is ~0.3 evts/kgd, with a dominant contribution from the detector container.

  5. Mergers as an Omega estimator

    International Nuclear Information System (INIS)

    Carlberg, R.G.

    1990-01-01

    The redshift dependence of the fraction of galaxies which are merging or strongly interacting is a steep function of Omega and depends on the ratio of the cutoff velocity for interactions to the pairwise velocity dispersion. For typical galaxies the merger rate is shown to vary as (1 + z)^m, where m is about 4.51 Omega^0.42, for Omega near 1 and a CDM-like cosmology. The index m has a relatively weak dependence on the maximum merger velocity, the mass of the galaxy, and the background cosmology, for small variations around a cosmology with a low redshift of galaxy formation, z of about 2. Estimates of m from optical and IRAS galaxies have found that m is about 3-4, but with very large uncertainties. If quasar evolution follows the evolution of galaxy merging and m for quasars is greater than 4, then Omega is greater than 0.8. 21 refs
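
    Using only the scaling quoted in the abstract, an observed merger-evolution index m can be inverted for a rough Omega estimate (a sketch; the constants 4.51 and 0.42 are taken from the abstract and are stated to hold only for Omega near 1):

```python
# Invert the abstract's scaling m ≈ 4.51 * Omega**0.42 to obtain Omega
# from an observed merger-evolution index m. Valid only near Omega = 1
# in the CDM-like cosmology considered.
def omega_from_m(m):
    return (m / 4.51) ** (1.0 / 0.42)

# m = 4.51 corresponds to Omega = 1 by construction
print(omega_from_m(4.51))  # 1.0
# The abstract's quasar case, m > 4, maps to Omega near or above ~0.8
print(omega_from_m(4.0))
```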

  6. 2007 Estimated International Energy Flows

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Belles, R D; Simon, A J

    2011-03-10

    An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (e.g., coal, petroleum, natural gas) through transformations such as electricity generation to end uses (e.g., residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.

  7. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal...... For the models selected to interpret the experimental data, this chapter uses available models from literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii)...... engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject.

  8. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
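
    The line-source bookkeeping such a model implies can be sketched as follows (all traffic volumes and emission factors below are invented for illustration; the paper's model additionally distributes non-line sources in proportion to population and traffic density):

```python
# Hypothetical line-source emission tally: each main road contributes
# emissions = traffic volume * road length * per-vehicle emission factor,
# under the abstract's assumption of constant traffic parameters in the
# time interval considered. Numbers below are invented for illustration.
roads = [
    # (vehicles per hour, length in km, g pollutant per vehicle-km)
    (1200, 4.0, 2.5),
    (800, 2.5, 2.5),
]
line_source_g_per_h = sum(q * length * ef for q, length, ef in roads)
print(line_source_g_per_h)  # total grams of pollutant per hour
```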

  9. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Full Text Available Usually Business Process Management Systems (BPMS are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  10. Supplemental report on cost estimates

    International Nuclear Information System (INIS)

    1992-01-01

    The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior-Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled, ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis

  11. Age Estimation in Forensic Sciences

    Science.gov (United States)

    Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.

    2010-01-01

    Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death as well as gender can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (14C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R2 = 0.66). Aspartic acid racemization also showed a good precision with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905

  12. Runoff estimation in residential area

    Directory of Open Access Journals (Sweden)

    Meire Regina de Almeida Siqueira

    2013-12-01

    Full Text Available This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
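
    The USDA-NRCS Curve-Number method named in the abstract follows a standard pair of equations; a small sketch (the CN value and rainfall depth below are illustrative, not those of the Guaratinguetá study):

```python
# SCS/NRCS Curve-Number runoff, metric form:
#   S = 25400/CN - 254          potential maximum retention (mm)
#   Ia = 0.2 * S                initial abstraction (mm)
#   Q = (P - Ia)^2 / (P - Ia + S)   for rainfall P > Ia, else Q = 0
def runoff_mm(P_mm, CN):
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# Illustrative values only: CN = 85 (e.g., an urbanized soil-cover
# complex) and a 60 mm storm.
q = runoff_mm(60.0, 85)
print(round(q, 1))  # runoff depth in mm
```

    Lowering the effective CN (e.g., through the reforestation discussed in the study) reduces Q for the same storm, which is the mechanism behind the quoted runoff reduction.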

  13. Estimated status 2006-2015

    International Nuclear Information System (INIS)

    2003-01-01

    According to article 6 of the French law from February 10, 2000 relative to the modernization and development of the electric public utility, the manager of the public power transportation grid (RTE) has to produce, at least every two years and under the control of the French government, a pluri-annual estimated status. Then, the energy ministry uses this status to prepare the pluri-annual planning of power production investments. The estimated status aims at establishing a medium- and long-term diagnosis of the balance between power supply and demand and at evaluating the new production capacity needs to ensure a durable security of power supplies. The hypotheses relative to the power consumption and to the evolution of the power production means and trades are presented in chapters 2 to 4. Chapter 5 details the methodology and modeling principles retained for the supply-demand balance simulations. Chapter 6 presents the probabilistic simulation results at the 2006, 2010 and 2015 prospects and indicates the volumes of reinforcement of the production parks which would warrant an acceptable level of security. Chapter 7 develops the critical problem of winter demand peaks and evokes the possibilities linked with demand reduction, market resources and use of the existing park. Finally, chapter 8 makes a synthesis of the technical conclusions and recalls the determining hypotheses that have been retained. The particular situations of western France, of the Mediterranean and Paris region, and of Corsica and overseas territories are examined in chapter 9. The simulation results for all consumption-production scenarios and the wind-power production data are presented in appendixes. (J.S.)

  14. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    Full Text Available The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.

  15. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Full Text Available Qualitative techniques of poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates in strata deteriorates (i.e., the standard deviation increases) as the sample size becomes smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.

  16. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
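
    The underlying TDOA-to-DOA geometry mentioned in the abstract can be sketched for the simplest far-field, two-receiver case (the paper's filtered multi-channel phase estimator is considerably more elaborate):

```python
import math

# Far-field TDOA -> DOA geometry for two microphones a distance d apart:
# a source at angle theta from broadside arrives with time difference
# tau = d * sin(theta) / c, hence theta = arcsin(c * tau / d).
# Spacing and angle below are hypothetical.
def doa_from_tdoa(tau_s, d_m, c=343.0):
    """Return DOA in degrees from a TDOA in seconds (speed of sound c in m/s)."""
    return math.degrees(math.asin(c * tau_s / d_m))

d = 0.05                                          # 5 cm microphone spacing
tau = d * math.sin(math.radians(30.0)) / 343.0    # simulate a 30-degree source
print(round(doa_from_tdoa(tau, d), 1))  # 30.0
```

    In practice tau (or the narrowband phase differences it corresponds to) is itself an estimate, which is why variance-minimizing filtering of the phase estimates, as in the paper, matters.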

  17. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

    We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  18. Estimation of the energy needs; Estimation des besoins energetiques

    Energy Technology Data Exchange (ETDEWEB)

    Ailleret, [Electricite de France (EDF), Dir. General des Etudes de Recherches, 75 - Paris (France)

    1955-07-01

    The present report draws up a balance of current energy consumption and of the consumption foreseeable over the next twenty years. Present energy comes mainly from the consumption of coal, petroleum products, and electricity, the latter essentially hydraulic. Growth of the market stems essentially from the development of industrial activity and from new applications that depend on the cost and the distribution of electric energy. To this end, atomic energy offers good industrial prospects as a complement to present energy resources in order to meet the new needs. (M.B.)

  19. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  20. State estimation for a hexapod robot

    CSIR Research Space (South Africa)

    Lubbe, Estelle

    2015-09-01

    Full Text Available This paper introduces a state estimation methodology for a hexapod robot that makes use of proprioceptive sensors and a kinematic model of the robot. The methodology focuses on providing reliable full pose state estimation for a commercially...

  1. Access Based Cost Estimation for Beddown Analysis

    National Research Council Canada - National Science Library

    Pennington, Jasper E

    2006-01-01

    The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...

  2. Estimated annual economic loss from organ condemnation ...

    African Journals Online (AJOL)

    as a basis for the analysis of estimation of the economic significance of bovine .... percent involvement of each organ were used in the estimation of the financial loss from organ .... DVM thesis, Addis Ababa University, Faculty of Veterinary.

  3. Velocity Estimate Following Air Data System Failure

    National Research Council Canada - National Science Library

    McLaren, Scott A

    2008-01-01

    .... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...

  4. On Estimating Quantiles Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Berger Yves G.

    2015-03-01

    Full Text Available We propose a transformation-based approach for estimating quantiles using auxiliary information. The proposed estimators can be easily implemented using a regression estimator. We show that the proposed estimators are consistent and asymptotically unbiased. The main advantage of the proposed estimators is their simplicity. Despite the fact that the proposed estimators are not necessarily more efficient than their competitors, they offer a good compromise between accuracy and simplicity. They can be used under single and multistage sampling designs with unequal selection probabilities. A simulation study supports our findings and shows that the proposed estimators are robust and of an acceptable accuracy compared to alternative estimators, which can be more computationally intensive.
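
    For context, a design-weighted direct quantile estimator, the kind of baseline such transformation-based estimators are typically compared against under unequal selection probabilities, can be sketched as follows (generic background, not the paper's estimator):

```python
# Design-weighted (Hajek-type) direct quantile sketch: with weights
# w_i (e.g., inverse selection probabilities), return the smallest
# observed value whose cumulative weight share reaches p.
# Data and weights below are hypothetical.
def weighted_quantile(y, w, p):
    order = sorted(range(len(y)), key=lambda i: y[i])
    total = sum(w)
    cum = 0.0
    for i in order:
        cum += w[i]
        if cum >= p * total:
            return y[i]
    return y[order[-1]]

y = [3.0, 1.0, 4.0, 2.0]
w = [1.0, 1.0, 1.0, 1.0]   # equal weights reduce to the ordinary sample quantile
print(weighted_quantile(y, w, 0.5))  # 2.0
```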

  5. On Estimation and Testing for Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2013-01-01

    Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research

  6. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    Full Text Available BACKGROUND: The National Institutes of Health (NIH is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL. The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current or reduction in risk (22% to 35% vs. current are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  7. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of

  8. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
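    The portfolio-theoretic machinery behind such an efficient frontier reduces to a small linear-algebra computation. A minimal sketch with hypothetical expected returns and covariances (stand-in numbers, not NIH data) is:

    ```python
    import numpy as np

    # Toy mean-variance setup for 3 hypothetical research portfolios.
    mu = np.array([0.08, 0.12, 0.10])            # expected "returns" (e.g., YLL reduction)
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.05]])

    def min_var_weights(target):
        # Closed-form minimum-variance weights for a given target return,
        # subject only to sum(w) = 1 (negative weights allowed in this sketch).
        inv = np.linalg.inv(cov)
        ones = np.ones(mu.size)
        a = ones @ inv @ ones
        b = ones @ inv @ mu
        c = mu @ inv @ mu
        lam = (c - b * target) / (a * c - b * b)
        gam = (a * target - b) / (a * c - b * b)
        return inv @ (lam * ones + gam * mu)

    w = min_var_weights(0.10)
    ```

    Sweeping the target return and recording the resulting portfolio variance traces out the efficient frontier.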

  9. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
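    The classical ratio estimator that this family generalizes can be sketched on synthetic data (the population, sample sizes, and proportionality below are arbitrary illustrations, not the authors' estimator):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical finite population where y is roughly proportional to x.
    N = 1000
    x = rng.uniform(10, 50, N)
    y = 2.0 * x + rng.normal(0, 2, N)
    X_bar = x.mean()                      # known population mean of the auxiliary variable

    # Systematic sample: every k-th unit from a random start.
    n = 100
    k = N // n
    start = rng.integers(0, k)
    idx = np.arange(start, N, k)[:n]

    y_bar, x_bar = y[idx].mean(), x[idx].mean()

    # Classical ratio estimator of the population mean of y.
    y_ratio = y_bar * (X_bar / x_bar)
    ```

    When y is roughly proportional to x and the population mean of x is known, the ratio correction typically reduces variance relative to the plain sample mean.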

  10. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....
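    A bare-bones version of the autocorrelation-based approach that the abstract contrasts with parametric methods might look like this (synthetic tone; the pitch search range is a hypothetical choice):

    ```python
    import numpy as np

    fs = 8000.0
    f0_true = 220.0
    t = np.arange(0, 0.05, 1 / fs)
    x = np.sin(2 * np.pi * f0_true * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

    # Autocorrelation via full correlation; keep non-negative lags only.
    r = np.correlate(x, x, mode="full")[x.size - 1:]

    # Pick the autocorrelation peak within a plausible pitch range (80-500 Hz).
    lo, hi = int(fs / 500), int(fs / 80)
    lag = lo + np.argmax(r[lo:hi + 1])
    f0_hat = fs / lag
    ```

    The lag resolution limits accuracy to one sample period, which is one reason parametric estimators can be more accurate.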

  11. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
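    One conventional bandwidth estimator such comparisons typically include is Silverman's rule of thumb; a minimal sketch for a Gaussian kernel on synthetic data is:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(0.0, 1.0, 500)
    n = data.size

    # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
    sigma = min(data.std(ddof=1),
                (np.percentile(data, 75) - np.percentile(data, 25)) / 1.349)
    h = 0.9 * sigma * n ** (-1 / 5)

    def kde(x_eval, data, h):
        # Gaussian-kernel density estimate evaluated at the points x_eval.
        u = (x_eval[:, None] - data[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    grid = np.linspace(-4, 4, 201)
    dens = kde(grid, data, h)
    ```

    Rule-of-thumb bandwidths assume near-Gaussian data, which is exactly where they tend to break down on the high-dimensional pattern-recognition tasks the study investigates.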

  12. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  13. Carleman estimates for some elliptic systems

    International Nuclear Information System (INIS)

    Eller, M

    2008-01-01

    A Carleman estimate for a certain first order elliptic system is proved. The proof is elementary and does not rely on pseudo-differential calculus. This estimate is used to prove Carleman estimates for the isotropic Lamé system as well as for the isotropic Maxwell system with C^1 coefficients.

  14. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  15. Estimating uncertainty of data limited stock assessments

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro

    2017-01-01

    -limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock...

  16. Another look at the Grubbs estimators

    KAUST Repository

    Lombard, F.; Potgieter, C.J.

    2012-01-01

    of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit

  17. Load Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune

    2007-01-01

    When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...
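    The inverse-filtering step described here can be sketched for a toy single-degree-of-freedom system with a known frequency response function (all parameters hypothetical, not the paper's structure):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Known FRF of a hypothetical single-DOF system (response per unit force).
    fs, n = 256.0, 1024
    f = np.fft.rfftfreq(n, 1 / fs)
    wn, zeta, w_ = 2 * np.pi * 20.0, 0.05, 2 * np.pi * f
    H = 1.0 / (wn**2 - w_**2 + 2j * zeta * wn * w_)

    # Unknown random load and the response spectrum it produces.
    load = rng.normal(size=n)
    Y = np.fft.rfft(load) * H

    # Inverse filtering: divide the response spectrum by the FRF.
    load_hat = np.fft.irfft(Y / H, n=n)
    ```

    With a multi-channel transfer matrix the division becomes a (pseudo-)inverse at each frequency line, and measurement noise makes regularization necessary in practice.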

  18. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...

  19. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  20. Estimation of exposed dose, 1

    International Nuclear Information System (INIS)

    Okajima, Shunzo

    1976-01-01

    Radioactive atomic fallout in the Nishiyama district of Nagasaki Prefecture is reported on the basis of surveys conducted since 1969. In 1969, the amount of 137Cs in the bodies of 50 inhabitants of the Nishiyama district was measured using a human counter and compared with that of a non-exposed group. The average value of 137Cs (pCi/kg) was higher in the inhabitants of the Nishiyama district (38.5 in men and 24.9 in women) than in the controls (25.5 in men and 14.9 in women). A resurvey in 1971 showed that the amount of 137Cs had decreased to 76% in men and 60% in women. When the amount of 137Cs in the body was calculated from chemical analysis of urine, it was 29.0 ± 8.2 in men and 29.4 ± 26.2 in women in the Nishiyama district, and 29.9 ± 8.2 in men and 29.4 ± 11.7 in women in the controls. The content of 137Cs in soils and crops (potato, etc.) was higher in the Nishiyama district than in the control areas. When the internal exposure dose per year was calculated from the amount of 137Cs in the body in 1969, it was 0.29 mrad/year in men and 0.19 mrad/year in women. Finally, the internal exposure dose immediately after the explosion was estimated. (Serizawa, K.)

  1. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  2. Quantum rewinding via phase estimation

    Science.gov (United States)

    Tabia, Gelo Noel

    2015-03-01

    In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewinded to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.

  3. Global Warming Estimation from MSU

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

    In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 ± 0.06 K/decade during 1980-97.
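    The trend-plus-uncertainty arithmetic in such an analysis amounts to an ordinary least-squares fit; a toy sketch on a synthetic anomaly series (not the MSU data) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical monthly temperature-anomaly series, 1980-97,
    # with a built-in trend of 0.13 K/decade plus noise.
    years = 1980 + np.arange(216) / 12.0
    temps = 0.013 * (years - 1980) + 0.1 * rng.normal(size=years.size)

    # Ordinary least-squares trend and its standard error.
    A = np.column_stack([years - years.mean(), np.ones(years.size)])
    coef = np.linalg.lstsq(A, temps, rcond=None)[0]
    trend_per_decade = coef[0] * 10.0

    resid = temps - A @ coef
    se_slope = np.sqrt(resid @ resid / (years.size - 2) / np.sum(A[:, 0] ** 2))
    se_per_decade = se_slope * 10.0
    ```

    The study's quoted ±0.06 K/decade is dominated by the systematic (orbital-drift) term, which a purely statistical standard error like the one above does not capture.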

  4. Estimates of LLEA officer availability

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1978-05-01

    One element in the Physical Protection of Nuclear Material in Transit Program is a determination of the number of local law enforcement agency (LLEA) officers available to respond to an attack upon a special nuclear material (SNM) carrying convoy. A computer model, COPS, has been developed at Sandia Laboratories to address this problem. Its purposes are to help identify to the SNM shipper areas along a route which may have relatively low police coverage and to aid in the comparison of alternate routes to the same location. Data bases used in COPS include population data from the Bureau of Census and police data published by the FBI. Police are assumed to be distributed in proportion to the population, with adjustable weighting factors. Example results illustrating the model's capabilities are presented for two routes between Los Angeles, CA, and Denver, CO, and for two routes between Columbia, SC, and Syracuse, NY. The estimated police distribution at points along the route is presented. Police availability as a function of time is modeled based on the time-dependent characteristics of a trip. An example demonstrating the effects of jurisdictional restrictions on the size of the response force is given. Alternate routes between two locations are compared by means of cumulative plots

  5. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. Then these two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. Especially, the proposed algorithms are very promising for complex problems with many local optima.
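    Stripped of the multimodal machinery (niching, clustering, Cauchy sampling), the core EDA loop of sample, select, and refit can be sketched as follows; the objective function and all settings are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Minimal univariate Gaussian EDA minimising f(x) = (x - 3)^2.
    def f(x):
        return (x - 3.0) ** 2

    mu, sigma = 0.0, 5.0
    for _ in range(40):
        pop = rng.normal(mu, sigma, 100)              # sample a population from the model
        elite = pop[np.argsort(f(pop))[:20]]          # select the best 20%
        mu, sigma = elite.mean(), elite.std() + 1e-6  # refit the distribution model
    ```

    The paper's contribution is to run many such models in parallel (one per niche) so that several optima are tracked at once rather than collapsing onto a single peak.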

  6. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  7. Indirect estimators in US federal programs

    CERN Document Server

    1996-01-01

    In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators and they are based on case studies from a variety of federal programs. As a result, many researchers will find this book provides a valuable survey of how indirect estimators are used in practice and which addresses some of the pitfalls of these methods.

  8. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

    Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate this method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.

  9. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

  10. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

    In static single-equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. ... in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples...

  11. Optimal estimation of the optomechanical coupling strength

    Science.gov (United States)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.

  12. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  13. Budget estimates. Fiscal year 1998

    International Nuclear Information System (INIS)

    1997-02-01

    The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury

  14. Budget estimates. Fiscal year 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury.

  15. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

    Kriging is a statistical procedure of estimating the best weights of a linear estimator. Suppose there is a point or an area or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum up to one. The problem is how to determine the best weights for which the estimation variance is the least. The system of equations as shown above is generally known as the kriging system and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substitution of the weights in the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
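    The ordinary-kriging system referred to in this abstract can be written down and solved directly for a small one-dimensional example with a linear semi-variogram (all sample locations and values hypothetical):

    ```python
    import numpy as np

    # Sample locations and values of the hydrological variable.
    xs = np.array([0.0, 1.0, 3.0, 4.0])
    zs = np.array([1.0, 2.0, 4.0, 3.0])
    x0 = 2.0                                  # estimation point

    def gamma(h, b=0.5):
        # Linear semi-variogram model, gamma(h) = b * |h|.
        return b * np.abs(h)

    n = xs.size
    # Ordinary-kriging system: [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1],
    # where mu is the Lagrange multiplier enforcing sum(w) = 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(xs[:, None] - xs[None, :])
    A[-1, -1] = 0.0
    b_vec = np.append(gamma(xs - x0), 1.0)
    sol = np.linalg.solve(A, b_vec)
    w = sol[:n]

    z_hat = w @ zs                            # kriging estimate at x0
    var_k = w @ gamma(xs - x0) + sol[-1]      # kriging (estimation) variance
    ```

    With a linear semi-variogram in one dimension the nearest neighbours screen the others, so here the weights concentrate on the two bracketing samples.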

  16. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices. On the two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent.
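    The Hill estimator whose finite-sample behavior the paper studies is only a few lines; a sketch on a synthetic Pareto sample (tail size k chosen arbitrarily) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Pareto sample with true tail exponent alpha = 2: P(X > x) = x**(-2), x >= 1.
    alpha_true = 2.0
    x = rng.pareto(alpha_true, 5000) + 1.0

    def hill(x, k):
        # Hill estimator of the tail exponent from the k largest order statistics.
        xs = np.sort(x)
        tail = xs[-k:]
        return k / np.sum(np.log(tail / xs[-k - 1]))

    alpha_hat = hill(x, k=500)
    ```

    For an exact Pareto tail the estimate is close to the true exponent; the paper's point is that under α-stable data, and with small samples or a poorly chosen k, the plain Hill estimate is biased upward.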

  17. Robust bearing estimation for 3-component stations

    International Nuclear Information System (INIS)

    CLAASSEN, JOHN P.

    2000-01-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings

  18. Iterative Estimation in Turbo Equalization Process

    Directory of Open Access Journals (Sweden)

    MORGOS Lucian

    2014-05-01

    Full Text Available This paper presents iterative estimation in the turbo equalization process. Turbo equalization is a reception process in which equalization and decoding are performed jointly rather than as separate steps. For the equalizer to work properly, it must receive, before equalization, accurate information about the channel impulse response. This estimation of the channel impulse response is done by transmitting a training sequence known at the receiver. Knowing both the transmitted and received sequences, an estimate of the channel impulse response can be calculated using one of the well-known estimation algorithms. The estimate can also be iteratively recalculated from the data sequence available at the channel output and the estimated data sequence coming from the turbo equalizer output, thereby refining the obtained results.
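    The training-based channel estimation step described above can be sketched as a least-squares fit; the channel taps, training length, and noise level below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([1.0, 0.5, -0.2])        # hypothetical channel impulse response
train = rng.choice([-1.0, 1.0], size=64)   # known BPSK training sequence
# Received signal: channel convolution plus additive Gaussian noise
y = np.convolve(train, h_true)[:len(train)] + 0.01 * rng.standard_normal(len(train))

# Build the convolution matrix of the training sequence (column k is the
# training sequence delayed by k samples), then solve y = X h by least squares.
L = len(h_true)
X = np.column_stack(
    [np.concatenate([np.zeros(k), train[:len(train) - k]]) for k in range(L)]
)
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(h_hat)  # close to h_true
```

The iterative refinement the abstract mentions would repeat this fit, replacing the training sequence with the equalizer's decoded data estimates.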

  19. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered

  20. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

    Consider a linear model Y = X β + z , where X = X n,p and z ~ N (0, I n ). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X ' X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage , which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening , and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.

  1. Atmospheric Turbulence Estimates from a Pulsed Lidar

    Science.gov (United States)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  2. Cosmochemical Estimates of Mantle Composition

    Science.gov (United States)

    Palme, H.; O'Neill, H. St. C.

    2003-12-01

    , and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996). At the beginning of the twentieth century, Harkins at the University of Chicago thought that meteorites would provide a better estimate of the bulk composition of the Earth than the terrestrial rocks collected at the surface, as we have access only to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant, and therefore more stable, than those with odd atomic numbers, and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling the corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI meteorites (see Burke, 1986). If the Earth were compositionally similar to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron be concentrated in the interior of the Earth. The presence of a central metallic core in the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established through the study of seismic wave propagation by Oldham in 1906, with the outer boundary of the core accurately located at a depth of 2,900 km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted.
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush

  3. Entropy estimates of small data sets

    Energy Technology Data Exchange (ETDEWEB)

    Bonachela, Juan A; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hinrichsen, Haye [Fakultaet fuer Physik und Astronomie, Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany)

    2008-05-23

    Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
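    The bias of the naive plug-in estimator that the authors refer to is easy to demonstrate; the classical Miller-Madow correction shown below is a standard first-order fix, not the balanced estimator of this paper:

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Naive (plug-in) Shannon entropy estimate in nats; biased low for short series."""
    n = len(samples)
    p = np.array([c / n for c in Counter(samples).values()])
    return -np.sum(p * np.log(p))

def miller_madow_entropy(samples):
    """Plug-in estimate plus the first-order (K-1)/(2N) bias correction."""
    k_observed = len(set(samples))
    return plugin_entropy(samples) + (k_observed - 1) / (2 * len(samples))

rng = np.random.default_rng(2)
data = rng.integers(0, 8, size=50)   # short series over 8 equiprobable outcomes
h_true = np.log(8)                   # true entropy, about 2.079 nats
print(plugin_entropy(data), miller_madow_entropy(data), h_true)
```

Because the empirical distribution of a short series is never exactly uniform, the plug-in estimate falls below the true entropy, and the correction shifts it back upward.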

  4. Entropy estimates of small data sets

    International Nuclear Information System (INIS)

    Bonachela, Juan A; Munoz, Miguel A; Hinrichsen, Haye

    2008-01-01

    Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)

  5. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are twofold. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  6. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

    We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from...... a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both...... the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...

  7. Resilient Distributed Estimation Through Adversary Detection

    Science.gov (United States)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

    This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\mathcal{FRDE}$.
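    A consensus+innovations update of the kind FRDE builds on can be sketched for a scalar parameter; this omits the adversary-detection flags of the paper, and the network topology, gains, and noise levels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 4.2                      # unknown scalar parameter to be estimated
n_agents = 5
# Ring network: each agent exchanges estimates with its two neighbours
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
x = np.zeros(n_agents)           # local estimates, one per agent

for t in range(1, 2001):
    beta = 0.3                   # consensus gain
    alpha = 1.0 / t              # decaying innovation gain
    y = theta + 0.5 * rng.standard_normal(n_agents)   # noisy local observations
    x_new = x.copy()
    for i in range(n_agents):
        consensus = sum(x[i] - x[j] for j in neighbors[i])   # disagreement with neighbours
        innovation = y[i] - x[i]                             # local sensing residual
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    x = x_new
print(x)  # all agents' estimates approach theta
```

The consensus term drives the agents to agree while the decaying innovation term anchors the agreed value to the observations, which is the connectivity-plus-observability mechanism the abstract invokes.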

  8. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes
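    Two of the estimators compared above have simple closed forms for the two-parameter exponential with location μ and scale σ; a sketch on synthetic data (the parameter values below are hypothetical, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
mu_true, sigma_true = 10.0, 2.0                   # hypothetical location and scale
x = mu_true + rng.exponential(sigma_true, size=5000)

# Maximum likelihood estimators: the smallest observation estimates the
# location, and the mean excess over it estimates the scale.
mu_mle = x.min()
sigma_mle = x.mean() - x.min()

# Moment estimators, from E[X] = mu + sigma and Var[X] = sigma^2
sigma_me = x.std(ddof=1)
mu_me = x.mean() - sigma_me

print(mu_mle, sigma_mle, mu_me, sigma_me)
```

The MLE of the location is biased upward by roughly σ/n, which is one motivation for the modified estimators (MME, MMLE) the paper also considers.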

  9. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  10. Science yield estimation for AFTA coronagraphs

    Science.gov (United States)

    Traub, Wesley A.; Belikov, Ruslan; Guyon, Olivier; Kasdin, N. Jeremy; Krist, John; Macintosh, Bruce; Mennesson, Bertrand; Savransky, Dmitry; Shao, Michael; Serabyn, Eugene; Trauger, John

    2014-08-01

    We describe the algorithms and results of an estimation of the science yield for five candidate coronagraph designs for the WFIRST-AFTA space mission. The targets considered are of three types, known radial-velocity planets, expected but as yet undiscovered exoplanets, and debris disks, all around nearby stars. The results of the original estimation are given, as well as those from subsequently updated designs that take advantage of experience from the initial estimates.

  11. Estimating Elevation Angles From SAR Crosstalk

    Science.gov (United States)

    Freeman, Anthony

    1994-01-01

    Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.

  12. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

    This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...

  13. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

    Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...

  14. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords : diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  15. State energy data report 1994: Consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.

  16. Self-learning estimation of quantum states

    International Nuclear Information System (INIS)

    Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.

    2002-01-01

    We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped ¹⁷¹Yb⁺ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence

  17. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique, estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique....

  18. State energy data report 1994: Consumption estimates

    International Nuclear Information System (INIS)

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included

  19. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
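    A minimal alternative to the EKF for AHRS-style attitude estimation is the classical complementary filter, which blends integrated gyro rate with the accelerometer tilt reference; the sketch below uses simulated data, and all sensor parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps = 0.01, 2000
pitch_true = 5.0                 # degrees, constant true attitude
gyro_bias = 0.5                  # deg/s bias that pure gyro integration cannot reject

est = 0.0
alpha = 0.98                     # weight on the (smooth but drifting) gyro path
for _ in range(n_steps):
    gyro = 0.0 + gyro_bias + 0.05 * rng.standard_normal()   # rate + bias + noise
    accel_pitch = pitch_true + 1.0 * rng.standard_normal()  # noisy tilt from accelerometer
    # High-pass the gyro integral, low-pass the accelerometer reference
    est = alpha * (est + gyro * dt) + (1 - alpha) * accel_pitch
print(est)  # settles near pitch_true despite the gyro bias
```

The gyro path supplies short-term smoothness while the accelerometer path bounds the long-term drift; an EKF achieves the same trade-off with gains derived from the noise statistics instead of a fixed alpha.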

  20. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  1. Outer planet probe cost estimates: First impressions

    Science.gov (United States)

    Niehoff, J.

    1974-01-01

    An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.

  2. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

    Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and has therefore been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria, were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most cases, based on both approaches of parameter estimation.
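    The Kostiakov model mentioned above, f(t) = a·t^b, is linear in log space, so its parameters can also be fitted by ordinary least squares on the logs; a sketch on synthetic data (the parameter values are hypothetical, not the Umuahia field data):

```python
import numpy as np

# Hypothetical "observed" infiltration rates following Kostiakov f(t) = a * t**b
a_true, b_true = 12.0, -0.4
t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)   # elapsed time, minutes
f = a_true * t ** b_true

# log f = log a + b log t  ->  fit a straight line to the log-transformed data
b_hat, log_a_hat = np.polyfit(np.log(t), np.log(f), 1)
a_hat = np.exp(log_a_hat)
print(a_hat, b_hat)  # recovers a_true and b_true exactly on noise-free data
```

A nonlinear solver such as GRG minimizes the squared error in the original units rather than in log space, which generally gives different (and often better) parameter estimates on noisy field data.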

  3. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
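    The objective underlying quantile estimation of this kind is the pinball (check) loss: minimizing it over a constant recovers the empirical quantile, which a subgradient-descent sketch demonstrates (the step size and iteration count below are hypothetical choices):

```python
import numpy as np

def pinball_loss(q, x, tau):
    """Check loss whose minimizer over q is the tau-quantile of x."""
    e = x - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(5)
x = rng.standard_normal(10_000)
tau = 0.9

q = 0.0
for _ in range(200):
    # Subgradient of the pinball loss with respect to q
    grad = np.mean(np.where(x > q, -tau, 1.0 - tau))
    q -= 2.0 * grad
print(q, np.quantile(x, tau))  # the two values agree closely
```

A quantile-regression network minimizes the same loss, but with q replaced by a network output that depends on covariates, giving the conditional quantile.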

  4. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  5. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.
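    For the classic special case of one-bit quantization (a standard result, not the new estimators derived in this paper), the Van Vleck relation recovers the true correlation of zero-mean Gaussian signals from the correlation of their signs; a sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
rho_true = 0.6
n = 200_000
# Two zero-mean, unit-variance Gaussian signals with correlation rho_true
z1 = rng.standard_normal(n)
z2 = rho_true * z1 + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

# One-bit (sign) quantization, then the naive product-average correlator
r_1bit = np.mean(np.sign(z1) * np.sign(z2))

# Van Vleck correction for Gaussian inputs: E[sign product] = (2/pi) arcsin(rho)
rho_hat = np.sin(np.pi / 2 * r_1bit)
print(r_1bit, rho_hat)  # r_1bit underestimates; rho_hat recovers rho_true
```

The underlying identity E[sgn z1 · sgn z2] = (2/π) arcsin ρ holds only for jointly Gaussian inputs, which matches the radio-astronomy setting the abstract assumes.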

  6. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  7. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  8. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    2015-01-01

    From Crofton’s formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  9. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  10. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...

  11. Towards Greater Harmonisation of Decommissioning Cost Estimates

    International Nuclear Information System (INIS)

    O'Sullivan, Patrick; Laraia, Michele; LaGuardia, Thomas S.

    2010-01-01

    The NEA Decommissioning Cost Estimation Group (DCEG), in collaboration with the IAEA Waste Technology Section and the EC Directorate-General for Energy and Transport, has recently studied cost estimation practices in 12 countries - Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Slovakia, Spain, Sweden, the United Kingdom and the United States. Its findings are to be published in an OECD/NEA report entitled Cost Estimation for Decommissioning: An International Overview of Cost Elements, Estimation Practices and Reporting Requirements. This booklet highlights the findings contained in the full report. (authors)

  12. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
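    The simplest of the three methods compared above, linear arc distance, amounts to a great-circle distance divided by an assumed average speed. A minimal sketch follows; the 30 mph default speed and the coordinates are invented for illustration and are not taken from the study.

```python
import math

def linear_arc_minutes(lat1, lon1, lat2, lon2, speed_mph=30.0):
    """Estimate transport time (minutes) from great-circle ("linear arc")
    distance at an assumed average speed. speed_mph is a made-up default."""
    r_miles = 3958.8  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula for the central angle between the two points.
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * r_miles * math.asin(math.sqrt(a))
    return 60.0 * dist / speed_mph

# Two points a few miles apart (illustrative coordinates, not study data).
t = linear_arc_minutes(40.4406, -79.9959, 40.4520, -79.9450)
```

    Route-based methods such as Google Maps or ArcGIS Network Analyst improve on this by following the road network, which is why their errors above are smaller.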

  13. Cost Estimating Handbook for Environmental Restoration

    International Nuclear Information System (INIS)

    1993-01-01

    Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs to the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.

  14. L’estime de soi : un cas particulier d’estime sociale ?

    OpenAIRE

    Santarelli, Matteo

    2016-01-01

    One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by asking the following question: is self-esteem a particular case of social esteem? To do so, I focus on two crucial...

  15. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  16. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  17. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
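    The Random Dec idea, averaging the response segments that follow each occurrence of a trigger condition, can be sketched as below. The AR(2) simulation stands in for the ARMA model mentioned in the abstract; the trigger level (one standard deviation, with an up-crossing condition) and segment length are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a lightly damped SDOF response with an AR(2) model driven by white noise.
n, f0, zeta, dt = 50_000, 1.0, 0.05, 0.05
w = 2 * np.pi * f0
a1 = 2 * np.exp(-zeta * w * dt) * np.cos(w * dt * np.sqrt(1 - zeta**2))
a2 = -np.exp(-2 * zeta * w * dt)
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Random Dec signature: average the segments that follow each up-crossing
# of the trigger level (level and segment length chosen for illustration).
level = x.std()
seg = 200
starts = np.where((x[:-1] < level) & (x[1:] >= level))[0] + 1
starts = starts[starts + seg <= n]
rdd = np.mean([x[s:s + seg] for s in starts], axis=0)
```

    The averaged signature decays like the free response of the system, which is what makes RDD usable as an autocorrelation (and hence spectral) estimator.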

  18. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  19. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  20. Fuel Burn Estimation Using Real Track Data

    Science.gov (United States)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
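    Once fuel flow has been determined along the track, fuel burned is its time integral. The sketch below shows only that integration step with trapezoidal quadrature; the sample times and fuel-flow values are invented and do not come from the BADA model or the flight-test data.

```python
# Fuel burned as the time integral of a fuel-flow history (trapezoidal rule).
times_s = [0.0, 60.0, 120.0, 180.0]        # sample times along the track
fuel_flow_kg_s = [1.20, 1.10, 1.05, 1.00]  # modeled fuel flow at each time (invented)

def fuel_burned(t, ff):
    """Trapezoidal integral of fuel flow over time -> fuel mass."""
    return sum(0.5 * (ff[i] + ff[i + 1]) * (t[i + 1] - t[i]) for i in range(len(t) - 1))

burn = fuel_burned(times_s, fuel_flow_kg_s)
```

    With constant flow the integral reduces to flow times duration, which gives a quick sanity check on the quadrature.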

  1. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  2. Multisensor simultaneous vehicle tracking and shape estimation

    NARCIS (Netherlands)

    Elfring, J.; Appeldoorn, R.P.W.; Kwakkernaat, M.R.J.A.E.

    2016-01-01

    This work focuses on vehicle automation applications that require both the estimation of kinematic and geometric information of surrounding vehicles, e.g., automated overtaking or merging. Rather than using one sensor that is able to estimate a vehicle's geometry from each sensor frame, e.g., a

  3. Decommissioning Cost Estimating -The ''Price'' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  4. Estimation of biochemical variables using quantum-behaved particle ...

    African Journals Online (AJOL)

    To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experiment results of L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...

  5. Estimated water use in Puerto Rico, 2010

    Science.gov (United States)

    Molina-Rivera, Wanda L.

    2014-01-01

    Water-use data were aggregated for the 78 municipios of the Commonwealth of Puerto Rico for 2010. Five major offstream categories were considered: public-supply water withdrawals and deliveries, domestic and industrial self-supplied water use, crop-irrigation water use, and thermoelectric-power freshwater use. One instream water-use category also was compiled: power-generation instream water use (thermoelectric saline withdrawals and hydroelectric power). Freshwater withdrawals for offstream use from surface-water [606 million gallons per day (Mgal/d)] and groundwater (118 Mgal/d) sources in Puerto Rico were estimated at 724 Mgal/d. The largest amount of freshwater withdrawn was by public-supply water facilities, estimated at 677 Mgal/d. Public-supply domestic water use was estimated at 206 Mgal/d. Fresh groundwater withdrawals by domestic self-supplied users were estimated at 2.41 Mgal/d. Industrial self-supplied withdrawals were estimated at 4.30 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 38.2 Mgal/d, or approximately 5 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 556 Mgal/d, and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities were estimated at 2,262 Mgal/d.
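    The offstream totals quoted in the abstract can be cross-checked with simple arithmetic: the surface-water and groundwater withdrawals sum to the stated total, and crop irrigation's share of it is roughly 5 percent.

```python
# Cross-check the offstream freshwater figures reported above (Mgal/d).
surface_water = 606.0
groundwater = 118.0
total_offstream = surface_water + groundwater  # stated as 724 Mgal/d

crop_irrigation = 38.2
irrigation_share = crop_irrigation / total_offstream  # stated as ~5 percent
```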

  6. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is being discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  7. Uranium mill tailings and risk estimation

    International Nuclear Information System (INIS)

    Marks, S.

    1984-04-01

    Work done in estimating projected health effects for persons exposed to mill tailings at vicinity properties is described. The effect of the reassessment of exposures at Hiroshima and Nagasaki on the risk estimates for gamma radiation is discussed. A presentation of current results in the epidemiological study of Hanford workers is included. 2 references

  8. New U.S. Foodborne Illness Estimate

    Centers for Disease Control (CDC) Podcasts

    This podcast discusses CDC's report on new estimates of illnesses due to eating contaminated food in the United States. Dr. Elaine Scallan, assistant professor at the University of Colorado and former lead of the CDC's FoodNet surveillance system, shares the details from the first new comprehensive estimates of foodborne illness in the U.S. since 1999.

  9. Estimating light-vehicle sales in Turkey

    Directory of Open Access Journals (Sweden)

    Ufuk Demiroğlu

    2016-09-01

    This paper is motivated by the surprisingly rapid growth of new light-vehicle sales in Turkey in 2015. Domestic sales grew 25%, dramatically surpassing industry estimates of around 8%. Our approach is to inform the sales trend estimate with the information obtained from the light-vehicle stock (the number of cars and light trucks officially registered in the country) and the scrappage data. More specifically, we improve the sales trend estimate by estimating the trend of its stock. Using household data, we show that an important reason for the rapid sales growth is that an increasing share of household budgets is spent on automobile purchases. The elasticity of light-vehicle sales to cyclical changes in aggregate demand is high and robust; its estimates are around 6 with a standard deviation of about 0.5. The price elasticity of light-vehicle sales is estimated to be about 0.8, but the estimates are imprecise and not robust. We estimate the trend level of light-vehicle sales to be roughly 7 percent of the existing stock. A remarkable out-of-sample forecast performance is obtained for horizons up to nearly a decade by a regression equation using only a cyclical gap measure, the time trend, and obvious policy dummies. Various specifications suggest that the strong 2015 growth of light-vehicle sales was predictable in late 2014.

  10. TP89 - SIRZ Decomposition Spectral Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  11. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  12. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency property is considered from a mild set of assumptions. A number of applications...

  13. Velocity Estimation in Medical Ultrasound [Life Sciences

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Villagómez Hoyos, Carlos Armando; Holbek, Simon

    2017-01-01

    This article describes the application of signal processing in medical ultrasound velocity estimation. Special emphasis is on the relation among acquisition methods, signal processing, and estimators employed. The description spans from current clinical systems for one-and two-dimensional (1-D an...

  14. Varieties of Quantity Estimation in Children

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2015-01-01

    In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…

  15. Estimating functions for inhomogeneous Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.

  16. Kalman filter to update forest cover estimates

    Science.gov (United States)

    Raymond L. Czaplewski

    1990-01-01

    The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
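    The combining step described above can be sketched as a scalar Kalman-style update: a prediction model carries the old inventory forward, then cheaper monitoring data adjust it in proportion to the relative variances. All numbers below are invented; this is a generic update, not the paper's forest-specific model.

```python
# Older, expensive inventory estimate and its variance (invented values).
inventory, inventory_var = 100.0, 4.0
# Predicted change over one period and the variance of that prediction.
growth, growth_var = 2.0, 1.0

# Predict forward one period (variances add).
pred, pred_var = inventory + growth, inventory_var + growth_var

# Update with newer, less precise monitoring data.
monitor, monitor_var = 106.0, 9.0
gain = pred_var / (pred_var + monitor_var)       # Kalman gain
updated = pred + gain * (monitor - pred)         # blended estimate
updated_var = (1 - gain) * pred_var              # reduced uncertainty
```

    The updated estimate lands between the prediction and the monitoring value, and its variance is smaller than either input's, which is the appeal of the filter for inventory updating.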

  17. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
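    The core numerical step named above, a linear least-squares problem with linearized equality constraints solved through a Karush-Kuhn-Tucker system, can be sketched generically. The matrices and the single constraint below are invented stand-ins, not the paper's homography parameterization.

```python
import numpy as np

# Solve  min ||A x - b||^2  subject to  C x = d  via the KKT system
#   [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d].
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
C = np.array([[1.0, -1.0, 0.0, 0.0]])  # e.g., tie two parameters together
d = np.array([0.0])

n, m = A.shape[1], C.shape[0]
kkt = np.block([[2 * A.T @ A, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
```

    The solution satisfies the constraint exactly while the Lagrange multiplier absorbs the constrained part of the gradient, which is what makes the estimates numerically stable compared with penalty methods.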

  18. Body composition estimation from selected slices

    DEFF Research Database (Denmark)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara

    2017-01-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total...

  19. Differences between carbon budget estimates unravelled

    NARCIS (Netherlands)

    Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Vuuren, Van Detlef P.; Riahi, Keywan; Allen, Myles; Knutti, Reto

    2016-01-01

    Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust

  20. Differences between carbon budget estimates unravelled

    NARCIS (Netherlands)

    Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Van Vuuren, Detlef P.|info:eu-repo/dai/nl/11522016X; Riahi, Keywan; Allen, Myles; Knutti, Reto

    2016-01-01

    Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust

  1. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  2. Estimates of Uncertainty around the RBA's Forecasts

    OpenAIRE

    Peter Tulip; Stephanie Wallace

    2012-01-01

    We use past forecast errors to construct confidence intervals and other estimates of uncertainty around the Reserve Bank of Australia's forecasts of key macroeconomic variables. Our estimates suggest that uncertainty about forecasts is high. We find that the RBA's forecasts have substantial explanatory power for the inflation rate but not for GDP growth.

  3. Cost-estimating for commercial digital printing

    Science.gov (United States)

    Keif, Malcolm G.

    2007-01-01

    The purpose of this study is to document current cost-estimating practices used in commercial digital printing. A research study was conducted to determine the use of cost-estimating in commercial digital printing companies. This study answers the questions: 1) What methods are currently being used to estimate digital printing? 2) What is the relationship between estimating and pricing digital printing? 3) To what extent, if at all, do digital printers use full-absorption, all-inclusive hourly rates for estimating? Three different digital printing models were identified: 1) Traditional print providers, who supplement their offset presswork with digital printing for short-run color and versioned commercial print; 2) "Low-touch" print providers, who leverage the power of the Internet to streamline business transactions with digital storefronts; 3) Marketing solutions providers, who see printing less as a discrete manufacturing process and more as a component of a complete marketing campaign. Each model approaches estimating differently. Understanding and predicting costs can be extremely beneficial. Establishing a reliable system to estimate those costs can be somewhat challenging though. Unquestionably, cost-estimating digital printing will increase in relevance in the years ahead, as margins tighten and cost knowledge becomes increasingly more critical.

  4. Estimating Gender Wage Gaps: A Data Update

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2016-01-01

    In the authors' 2011 "JEE" article, "Estimating Gender Wage Gaps," they described an interesting class project that allowed students to estimate the current gender earnings gap for recent college graduates using data from the National Association of Colleges and Employers (NACE). Unfortunately, since 2012, NACE no longer…

  5. Regression Equations for Birth Weight Estimation using ...

    African Journals Online (AJOL)

    In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...

  6. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...

  7. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS,

    Science.gov (United States)

    developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on

  8. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
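    The density-index idea above reduces to simple arithmetic: weigh and measure truckloads to obtain pounds per cubic foot for a species, then multiply by a log's scaled volume. The numbers below are invented for illustration.

```python
# Weighed and measured truckloads for one species (invented figures).
truckload_weights_lb = [42_000.0, 45_500.0, 39_800.0]
truckload_volumes_ft3 = [820.0, 905.0, 790.0]

# Local density index: total pounds per total cubic feet.
density_index = sum(truckload_weights_lb) / sum(truckload_volumes_ft3)

def estimated_log_weight(volume_ft3):
    """Estimate a log's weight from its measured volume."""
    return density_index * volume_ft3

w = estimated_log_weight(60.0)  # a 60 ft^3 log
```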

  9. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
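    For context, one common parameterization of the three-parameter normal ogive item response function estimated above is P(correct | θ) = c + (1 − c)·Φ(aθ − b), with discrimination a, difficulty b, and guessing parameter c. Parameterizations vary across texts, and the values below are illustrative only.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(theta, a, b, c):
    """Three-parameter normal ogive item response function."""
    return c + (1.0 - c) * phi(a * theta - b)

# At average ability with no net difficulty, the probability sits halfway
# between the guessing floor c and 1.
p = p_correct(theta=0.0, a=1.0, b=0.0, c=0.2)
```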

  10. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate, and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost, and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically to support decommissioning fund provisioning, with the plan ready for use when the need arises.

  11. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  12. Introduction to quantum-state estimation

    CERN Document Server

    Teo, Yong Siah

    2016-01-01

    Quantum-state estimation is an important field in quantum information theory that deals with the characterization of states of affairs for quantum sources. This book begins with background formalism in estimation theory to establish the necessary prerequisites. This basic understanding allows us to explore popular likelihood- and entropy-related estimation schemes that are suitable for an introductory survey on the subject. Discussions on practical aspects of quantum-state estimation ensue, with emphasis on the evaluation of tomographic performances for estimation schemes, experimental realizations of quantum measurements and detection of single-mode multi-photon sources. Finally, the concepts of phase-space distribution functions, which compatibly describe these multi-photon sources, are introduced to bridge the gap between discrete and continuous quantum degrees of freedom. This book is intended to serve as an instructive and self-contained medium for advanced undergraduate and postgraduate students to gra...

  13. Modified Weighted Kaplan-Meier Estimator

    Directory of Open Access Journals (Sweden)

    Mohammad Shafiq

    2007-01-01

    Full Text Available In many medical studies, the majority of the study subjects do not reach the event of interest during the study period. In such situations, survival probabilities can be estimated for censored observations by the Kaplan-Meier estimator. However, in the case of heavy censoring these estimates are biased and overestimate the survival probabilities. For heavy censoring, a new method was proposed (Bahrawar Jan, 2005 to estimate the survival probabilities by weighting the censored observations by the non-censoring rate. The main defect in this weighted method is that it gives zero weight to the last censored observation. To overcome this difficulty, a new weight is proposed that also gives a non-zero weight to the last censored observation.
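
    The estimator being modified above is the standard Kaplan-Meier product-limit estimator. As a point of reference, a minimal sketch of that unweighted form (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def kaplan_meier(times, events):
    """Standard (unweighted) Kaplan-Meier product-limit estimator.

    times  : observed times (event or censoring)
    events : 1 if the event occurred, 0 if the observation is censored
    Returns the distinct event times and S(t) evaluated at them.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)                # still under observation at t
        d = np.sum((times == t) & (events == 1))    # events occurring at t
        s *= 1.0 - d / at_risk                      # product-limit update
        surv.append(s)
    return event_times, np.array(surv)

# Toy data: 6 subjects, the observations at t = 2 and t = 4 censored.
t = [1, 2, 2, 3, 4, 5]
e = [1, 1, 0, 1, 0, 1]
times, s = kaplan_meier(t, e)   # S at t = 1, 2, 3, 5 -> 5/6, 2/3, 4/9, 0
```

    Note the defect discussed above: censored subjects only shrink the risk sets, so the last censored observation never contributes a factor to the product, which is what the proposed weighting corrects.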

  14. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi

    2017-04-12

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
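
    A much-simplified sketch of the pooling idea, replacing the Whittle likelihood and penalized basis fit with a plain SVD of log-periodograms (every name and parameter here is an illustrative stand-in for the paper's method):

```python
import numpy as np

def collective_log_sdf_basis(series, n_basis=2):
    """Pool log-periodograms of several series and extract a shared
    low-dimensional basis by SVD.

    Returns (mean log-periodogram, basis functions over frequency,
    per-series coefficients usable for clustering/visualization).
    """
    P = []
    for x in series:
        x = np.asarray(x, float) - np.mean(x)
        per = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # periodogram
        P.append(np.log(per[1:-1]))                  # drop DC/Nyquist, take logs
    P = np.array(P)                                  # rows: series, cols: frequencies
    mean_lp = P.mean(axis=0)
    U, sv, Vt = np.linalg.svd(P - mean_lp, full_matrices=False)
    return mean_lp, Vt[:n_basis], U[:, :n_basis] * sv[:n_basis]

# Six AR(1) series sharing two underlying spectral shapes (phi = 0.1 vs 0.8).
rng = np.random.default_rng(2)
series = []
for phi in (0.1, 0.1, 0.1, 0.8, 0.8, 0.8):
    x = np.zeros(256)
    eps = rng.standard_normal(256)
    for t in range(1, 256):
        x[t] = phi * x[t - 1] + eps[t]
    series.append(x)
mean_lp, basis, coeffs = collective_log_sdf_basis(series)
```

    The low-dimensional coefficient vectors (here one row per series) are what the abstract suggests using for visualization, clustering, and classification.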

  15. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, aiming to increase the accuracy of the estimation process and decrease its calculation cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified by Monte Carlo simulation, and the DOA accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.
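
    The TS-ESPRIT variant is not described here in enough detail to reproduce, but the standard ESPRIT procedure it extends can be sketched for a uniform linear array (the function name and all simulation parameters are illustrative assumptions):

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Standard (single-resolution) ESPRIT DOA estimator for a uniform
    linear array with sensor spacing d in wavelengths.

    X : (n_sensors, n_snapshots) complex array of baseband snapshots.
    Returns estimated DOAs in degrees.
    """
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    Es = eigvecs[:, -n_sources:]                 # signal subspace
    # Rotational invariance between the two overlapping subarrays:
    # Es[1:] ~= Es[:-1] @ Phi, with eig(Phi) = exp(j*2*pi*d*sin(theta))
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    mu = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(mu / (2.0 * np.pi * d)))

# Simulated check: two sources at -20 and +30 degrees, 8 sensors.
rng = np.random.default_rng(0)
n_sensors, n_snap, d = 8, 200, 0.5
angles = np.array([-20.0, 30.0])
A = np.exp(2j * np.pi * d * np.outer(np.arange(n_sensors),
                                     np.sin(np.radians(angles))))
S = (rng.standard_normal((2, n_snap))
     + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((n_sensors, n_snap))
            + 1j * rng.standard_normal((n_sensors, n_snap)))
est = np.sort(esprit_doa(A @ S + N, 2))
```

    At this signal-to-noise ratio the least-squares ESPRIT recovers both angles to within a fraction of a degree; the abstract's multiresolution variant targets the regimes where this basic approach degrades.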

  16. Another look at the Grubbs estimators

    KAUST Repository

    Lombard, F.

    2012-01-01

    We consider estimation of the precision of a measuring instrument without the benefit of replicate observations on heterogeneous sampling units. Grubbs (1948) proposed an estimator which involves the use of a second measuring instrument, resulting in a pair of observations on each sampling unit. Since the precisions of the two measuring instruments are generally different, these observations cannot be treated as replicates. Very large sample sizes are often required if the standard error of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two-instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit variance to instrument variance is available. Our results are presented in the context of the evaluation of on-line analyzers. A data set from an analyzer evaluation is used to illustrate the methodology. © 2011 Elsevier B.V.
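
    For context, Grubbs's original two-instrument moment estimators can be sketched as follows (the function name and simulated figures are illustrative; the paper's improved estimator using prior information on the variance ratio is not reproduced here):

```python
import numpy as np

def grubbs_estimates(x, y):
    """Grubbs (1948) moment estimators from paired readings of the same
    sampling units by two instruments with independent errors.

    Returns (unit variance, instrument-1 error variance,
    instrument-2 error variance). In small samples the error-variance
    estimates can come out negative, the difficulty noted above.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    c = np.cov(x, y, ddof=1)       # 2x2 sample covariance matrix
    unit_var = c[0, 1]             # shared unit-to-unit variation
    err1 = c[0, 0] - c[0, 1]       # instrument 1 imprecision
    err2 = c[1, 1] - c[0, 1]       # instrument 2 imprecision
    return unit_var, err1, err2

# Simulated check with known variances (unit 4.0, errors 0.25 and 1.0).
rng = np.random.default_rng(1)
tau = rng.normal(0.0, 2.0, 20000)           # true unit values
x = tau + rng.normal(0.0, 0.5, tau.size)    # instrument 1 readings
y = tau + rng.normal(0.0, 1.0, tau.size)    # instrument 2 readings
unit_var, err1, err2 = grubbs_estimates(x, y)
```

    The decomposition works because the covariance of the paired readings isolates the shared unit variance, leaving each instrument's excess variance as its imprecision.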

  17. Self-estimates of attention performance

    Directory of Open Access Journals (Sweden)

    CHRISTOPH MENGELKAMP

    2007-09-01

    Full Text Available In research on self-estimated IQ, gender differences are often found. The present study investigates whether these findings also hold for self-estimation of attention. A sample of 100 female and 34 male students was asked to complete the d2 test of attention. After taking the test, the students estimated their results in comparison to their fellow students. The results show that the students underestimate their percent rank compared with the actual percent rank they achieved in the test, but estimate their rank order fairly accurately. Moreover, males estimate their performance distinctly higher than females do, and this result holds even when the actual test score is statistically controlled. The results are discussed with regard to research on positive illusions and gender stereotypes.

  18. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, the sources of errors in cost models are examined, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture, single-system methods; and (2) the Price paradigms, which incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.

  19. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo

    2017-01-01

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  20. COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS

    Directory of Open Access Journals (Sweden)

    Ricardo de Melo e Silva Accioly

    2017-03-01

    Full Text Available Cost estimating relationships (CERs are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before a model is fitted. In the early phases they are the primary tool for cost estimating; in later phases they are usually used as an estimate validation tool and sometimes for benchmarking purposes. As in any other modeling methodology, there are a number of important steps in building a model. In this paper, the process of building a CER to estimate the drilling cost of onshore wells is addressed.
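
    A minimal sketch of fitting one common CER form, a power law estimated by ordinary least squares in log-log space (the cost driver, figures, and function name are hypothetical, not taken from the paper):

```python
import numpy as np

def fit_power_cer(driver, cost):
    """Fit the common power-law CER form cost = a * driver**b by ordinary
    least squares in log-log space. Illustrative only: a real CER is built
    on normalized historical data with diagnostics and validation steps.
    """
    b, log_a = np.polyfit(np.log(driver), np.log(cost), 1)
    return np.exp(log_a), b

# Hypothetical normalized history: measured depth (m) as the cost driver.
depth = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
cost = np.array([2.1, 2.9, 3.8, 4.4, 5.2])    # MM$, made-up figures
a, b = fit_power_cer(depth, cost)
predicted = a * 2200.0 ** b                   # estimate for a 2200 m well
```

    The exponent b captures the (diseconomy of) scale in the driver; in practice the regression would be preceded by normalizing the historical costs (currency, date, location) as the abstract emphasizes.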