WorldWideScience

Sample records for ir-based estimation technique

  1. Recent Progress on the Second Generation CMORPH: LEO-IR Based Precipitation Estimates and Cloud Motion Vector

    Science.gov (United States)

    Xie, Pingping; Joyce, Robert; Wu, Shaorong

    2015-04-01

    ... cloud motion vectors from the GEO/LEO IR-based precipitation estimates and the CFS Reanalysis (CFSR) precipitation fields. Motion vectors are first derived separately from the satellite IR-based precipitation estimates and the CFSR precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. Different error functions are tested to best reflect the performance of the satellite IR-based estimates and the CFSR in capturing the movements of precipitating cloud systems over different regions and in different seasons. Quantitative experiments are conducted to optimize the LEO IR-based precipitation estimation technique and the 2D-VAR-based motion vector analysis system. Detailed results will be reported at the EGU.
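
The 2D-VAR combination of the two motion-vector sources can be illustrated with a minimal sketch. All field values and error variances below are hypothetical, chosen only to show the mechanics; the actual CMORPH analysis operates on global gridded fields with regionally and seasonally tuned error functions.

```python
import numpy as np

def blend_motion_vectors(u_sat, u_cfsr, var_sat, var_cfsr):
    """Combine two motion-vector fields by minimizing a quadratic
    (2D-VAR-style) cost J(u) = (u - u_sat)^2/var_sat + (u - u_cfsr)^2/var_cfsr.
    The minimizer is the inverse-variance weighted average."""
    w_sat = 1.0 / var_sat
    w_cfsr = 1.0 / var_cfsr
    return (w_sat * u_sat + w_cfsr * u_cfsr) / (w_sat + w_cfsr)

# Hypothetical zonal motion components (m/s) on a tiny grid; the error
# variances stand in for the tuned, region-dependent error functions.
u_sat = np.array([[5.0, 6.0], [7.0, 8.0]])
u_cfsr = np.array([[4.0, 6.0], [9.0, 8.0]])
u_analysis = blend_motion_vectors(u_sat, u_cfsr, var_sat=1.0, var_cfsr=4.0)
```

With equal variances the analysis reduces to the plain average; unequal variances pull the analysis toward the more trusted source.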

  2. Channel Estimation Technique

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
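
The second step, decomposing the pilot-based coefficient estimates in a dictionary with a sparse coefficient vector, is commonly realized by greedy pursuit. The sketch below uses orthogonal matching pursuit with an orthonormal toy dictionary; both choices are assumptions for illustration, since the record does not name a solver, and a real dictionary would encode the transceiver filter characteristics and typically be overcomplete.

```python
import numpy as np

def omp(D, h, n_nonzero):
    """Orthogonal matching pursuit: find a sparse vector s with
    h ~ D @ s, selecting n_nonzero dictionary atoms greedily."""
    residual = h.astype(float).copy()
    support = []
    s = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], h, rcond=None)
        residual = h - D[:, support] @ coef
    s[support] = coef
    return s

# Toy orthonormal dictionary and a 2-sparse "second estimate" vector.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))
s_true = np.zeros(16); s_true[3] = 1.5; s_true[7] = -0.7
h_first = D @ s_true            # noiseless first-step estimates
s_hat = omp(D, h_first, n_nonzero=2)
```

For an orthonormal dictionary and noiseless data, OMP recovers the sparse vector exactly.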

  3. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
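
As a minimal illustration of the Bayesian flavor of such fuel-loading work, a prior belief can be updated with plot measurements. The conjugate normal-normal model and all numbers below are hypothetical, not the study's actual model or data.

```python
def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update: combine a prior N(prior_mean, prior_var)
    with independent observations of known variance obs_var.
    Precision (inverse variance) adds; the posterior mean is the
    precision-weighted blend of prior mean and data."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
    return post_mean, post_var

# Hypothetical prior for fine woody fuel loading (kg/m^2) and three plot samples.
mean, var = normal_update(prior_mean=0.8, prior_var=0.25,
                          obs=[1.0, 1.2, 0.9], obs_var=0.1)
```

The posterior variance is always smaller than the prior variance, which is the formal counterpart of "more intensive sampling gives more precise estimates."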

  4. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
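
The Random Dec estimate is formed by averaging signal segments that follow a triggering condition. A minimal level-crossing version is sketched below; the trigger choice is an assumption, as several RDD triggering variants exist.

```python
def random_decrement(x, level, seg_len):
    """Average the segments of x that start where the signal up-crosses
    `level`; for a zero-mean stationary response this average is
    proportional to the autocorrelation function."""
    segments = [
        x[i:i + seg_len]
        for i in range(1, len(x) - seg_len)
        if x[i - 1] < level <= x[i]          # up-crossing trigger
    ]
    if not segments:
        raise ValueError("no triggering points found")
    n = len(segments)
    return [sum(seg[k] for seg in segments) / n for k in range(seg_len)]

# Toy signal: alternating values trigger deterministically at every rise.
sig = [0.0, 2.0] * 8
rdd = random_decrement(sig, level=1.0, seg_len=4)
```

Note that the estimate uses only additions and one division per lag, which is the source of the speed advantage discussed in these papers.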

  5. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  6. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  7. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability that a plant has a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
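
The maximum likelihood rate estimates quoted in the abstract can be reproduced in miniature. The exposure figure below is hypothetical, chosen only so the arithmetic lands in the order of magnitude the abstract discusses.

```python
import math

def poisson_rate_mle(n_events, exposure):
    """MLE of a constant Poisson occurrence rate (events per unit exposure),
    with the usual large-sample standard error sqrt(n)/T."""
    rate = n_events / exposure
    std_err = math.sqrt(n_events) / exposure
    return rate, std_err

# Nine core damage events over a hypothetical 2250 reactor-years.
rate, se = poisson_rate_mle(9, 2250.0)
```

A learning curve replaces the constant rate with a decreasing function of cumulative operating years, fitted by the same likelihood machinery.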

  8. Cost analysis and estimating tools and techniques

    CERN Document Server

    Nussbaum, Daniel

    1990-01-01

    Changes in production processes reflect the technological advances permeating our products and services. U.S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...

  9. Population estimation techniques for routing analysis

    International Nuclear Information System (INIS)

    Sathisan, S.K.; Chagari, A.K.

    1994-01-01

    A number of on-site and off-site factors affect the potential siting of a radioactive materials repository at Yucca Mountain, Nevada. Transportation related issues such as route selection and design are among them. These involve evaluation of potential risks and impacts, including those related to population. Population characteristics (total population and density) are critical factors in risk assessment, emergency preparedness and response planning, and ultimately in route designation. This paper presents an application of Geographic Information System (GIS) technology to facilitate such analyses. Specifically, techniques to estimate critical population information are presented. A case study using the highway network in Nevada is used to illustrate the analyses. TIGER coverages are used as the basis for population information at the block level. The data are then synthesized at tract, county and state levels of aggregation. Of particular interest are population estimates for various corridor widths along transport corridors, ranging from 0.5 miles to 20 miles in this paper. A sensitivity analysis based on the level of data aggregation is also presented. The results of these analyses indicate that specific characteristics of the area and its population could be used as indicators to aggregate data appropriately for the analysis.

  10. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...

  11. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
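
Principal components regression, one of the two biased estimators discussed, can be sketched as ordinary least squares restricted to a truncated SVD of the regressor matrix. The data below are toy values constructed to exhibit collinearity; the paper's flight-test regressors are not reproduced.

```python
import numpy as np

def pcr(X, y, n_components):
    """Principal components regression: least squares restricted to the
    leading right-singular directions of X. Dropping small singular
    values introduces bias but reduces variance under collinearity."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = n_components
    # Coefficients in the truncated basis, mapped back to original space.
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# Toy regressors: third column nearly a copy of the first (collinearity).
rng = np.random.default_rng(1)
x1 = rng.standard_normal(50)
x2 = rng.standard_normal(50)
X = np.column_stack([x1, x2, x1 + 1e-6 * rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0, 0.0])
beta_pcr = pcr(X, y, n_components=2)
```

Keeping all components reproduces the ordinary least squares solution, so the bias enters only through the truncation.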

  12. Tracer techniques in estimating nuclear materials holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.

    1987-01-01

    Residual inventory of nuclear materials remaining in processing facilities (holdup) is recognized as an insidious problem for the safety of plant operations and the safeguarding of special nuclear materials (SNM). This paper reports on an experimental study where a well-known method of radioanalytical chemistry, namely the tracer technique, was successfully used to improve nondestructive measurements of holdup of nuclear materials in a variety of plant equipment. Such controlled measurements can improve the sensitivity of measurements of residual inventories of nuclear materials in process equipment by several orders of magnitude, and the good quality data obtained lend themselves to developing mathematical models of holdup of SNM during stable plant operations.

  13. Parameter estimation techniques for LTP system identification

    Science.gov (United States)

    Nofrarias Serra, Miquel

    LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is challenging not only in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, in contrast to on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods in a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy of each, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.

  14. Dosimetry techniques applied to thermoluminescent age estimation

    International Nuclear Information System (INIS)

    Erramli, H.

    1986-12-01

    The reliability and ease of field application of natural radioactivity dosimetry techniques are studied. The natural radioactivity in minerals is composed of the internal dose deposited by alpha and beta radiations issued from the sample itself and the external dose deposited by gamma and cosmic radiations issued from the surroundings of the sample. Two techniques for external dosimetry are examined in detail: TL dosimetry and field gamma dosimetry. Calibration and experimental conditions are presented. A new integrated dosimetric method for internal and external dose measurement is proposed: the TL dosimeter is placed in the soil in exactly the same conditions as those of the sample, during a time long enough for the total dose evaluation [fr

  15. Radon emanometric technique for 226Ra estimation

    International Nuclear Information System (INIS)

    Mandakini Maharana; Sengupta, D.; Eappen, K.P.

    2010-01-01

    Studies on natural background radiation show that the major contribution to the radiation dose received by the population is through the inhalation pathway, namely from radon (222Rn) gas. The immediate parent of radon being radium (226Ra), it is imperative that the radium content is measured in the various matrices present in the environment. Among the various methods available for the measurement of radium, gamma spectrometry and the radiochemical method are the two most extensively used. In comparison with these two methods, the radon emanometric technique described here is a simple and convenient method. The paper gives details of sample processing, the radon bubbler, the Lucas cell and the methodology used in the emanometric method. A comparison of the emanometric method with gamma spectrometry has also been undertaken and the results for a few soil samples are given. The results show fairly good agreement between the two methods. (author)

  16. Comparison of Recursive Estimation Techniques for Position Tracking of Radioactive Sources

    International Nuclear Information System (INIS)

    Muske, K.; Howse, J.

    2000-01-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
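
A recursive least squares tracker of the kind compared here can be sketched for a linear measurement model. The scalar stream below is a toy; the actual detector response to a radioactive source is nonlinear and is not reproduced.

```python
import numpy as np

class RecursiveLeastSquares:
    """Classic RLS update: refine the parameter estimate w after each
    measurement y ~ phi @ w without reprocessing past data."""
    def __init__(self, n, forgetting=1.0, p0=1e6):
        self.w = np.zeros(n)          # parameter estimate
        self.P = np.eye(n) * p0       # estimate covariance (large = vague prior)
        self.lam = forgetting         # < 1 discounts old data (tracking)

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.w = self.w + gain * (y - phi @ self.w)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.w

# Noiseless toy stream generated by w_true = [2, -1].
rls = RecursiveLeastSquares(2)
w_true = np.array([2.0, -1.0])
for t in range(50):
    phi = np.array([1.0, float(t)])
    rls.update(phi, phi @ w_true)
```

A forgetting factor below one lets the same recursion follow a slowly moving source rather than a static parameter.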

  17. Labelled antibody techniques in glycoprotein estimation

    International Nuclear Information System (INIS)

    Hazra, D.K.; Ekins, R.P.; Edwards, R.; Williams, E.S.

    1977-01-01

    The problems in the radioimmunoassay of the glycoprotein hormones (pituitary LH, FSH and TSH, and human chorionic gonadotrophin, HCG) are reviewed, viz.: limited specificity and sensitivity in the clinical context, interpretation of disparity between bioassay and radioimmunoassay, and interlaboratory variability. The advantages and limitations of the labelled antibody techniques, classical immunoradiometric methods and 2-site or 125I-anti-IgG indirect labelling modifications, are reviewed in general, and their theoretical potential in glycoprotein assays is examined in the light of previous work. Preliminary experiments in the development of a coated tube 2-site assay for glycoproteins using 125I-anti-IgG labelling are described, including conditions for maximizing solid phase extraction of the antigen, iodination of anti-IgG, and assay conditions such as the effects of temperature of incubation with antigen, 'hormone-free serum', heterologous serum and detergent washing. Experiments with extraction and antigen-specific antisera raised in the same or different species are described, as exemplified by LH and TSH assay systems, the latter apparently promising greater sensitivity than radioimmunoassay. Proposed experimental and mathematical optimisation and validation of the method as an assay system is outlined, and areas for further work are delineated. (orig.) [de

  18. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  19. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  20. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  1. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance.

    Highlights:
    • To estimate the states of the power system under dynamic environment.
    • The performance of the EKF method is degraded during anomaly conditions.
    • The proposed method remains robust towards anomalies.
    • The proposed method provides precise state estimates even in the presence of anomalies.
    • The results show that prediction accuracy is enhanced by using the proposed model.
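
Brown's double exponential smoothing, used here at the prediction step, is simple to state. The sketch below is a generic implementation with a made-up state history, not the paper's tuned predictor.

```python
def brown_forecast(series, alpha, horizon=1):
    """Brown's double exponential smoothing: two cascaded exponential
    smoothers give level a = 2*s1 - s2 and trend
    b = alpha/(1-alpha) * (s1 - s2); forecast = a + b*horizon."""
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1    # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2   # smoothing of the smoothing
    a = 2 * s1 - s2
    b = alpha / (1 - alpha) * (s1 - s2)
    return a + b * horizon

# Hypothetical state-variable history (e.g., a rotor angle in rad).
history = [1.00, 1.02, 1.05, 1.06, 1.09]
predicted = brown_forecast(history, alpha=0.5)
```

For a constant series the level equals the constant and the trend vanishes, so the one-step forecast reproduces the constant exactly.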

  2. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 ... In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation ... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high ...

  3. Evaluation of mfcc estimation techniques for music similarity

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Christensen, Mads Græsbøll; Murthi, Manohar

    2006-01-01

    Spectral envelope parameters in the form of mel-frequency cepstral coefficients are often used for capturing timbral information of music signals in connection with genre classification applications. In this paper, we evaluate mel-frequency cepstral coefficient (MFCC) estimation techniques ... The independent linear prediction and MVDR spectral estimators did not exhibit any statistically significant improvement over MFCCs based on the simpler FFT.

  4. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance ...

  5. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
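
At its core, three-dimensional volume estimation sums segmented voxels, which is why voxel dimensions (and slice thickness in particular) drive accuracy and precision. The sketch below is a pure illustration on a synthetic sphere, not the LungVCAR algorithm.

```python
import numpy as np

def nodule_volume(mask, voxel_dims):
    """Volume = number of segmented voxels times the voxel volume.
    Thicker slices mean coarser z-sampling and larger partial-volume
    error at the nodule boundary."""
    dx, dy, dz = voxel_dims
    return float(mask.sum()) * dx * dy * dz

# Synthetic spherical nodule of radius 4 mm sampled on a 0.5 mm grid.
r, step = 4.0, 0.5
coords = np.arange(-5, 5, step) + step / 2   # voxel centers
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
mask = X**2 + Y**2 + Z**2 <= r**2
vol = nodule_volume(mask, (step, step, step))
```

The analytic sphere volume is 4/3·π·r³ ≈ 268 mm³; coarsening the z-step while keeping the same sphere shows the slice-thickness dependence directly.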

  6. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  7. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    In this paper, we propose an improvement of the Minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The experimental data are 47 fire accident records from an insurance company in Thailand. The experiment has two operations: in the first, we minimize the K-S statistic using a grid search technique for nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, lognormal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the Minimum K-S estimator. The algorithms give a better Minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, lognormal, and exponential), while the Minimum K-S estimators of the normal and logistic distributions are unchanged.
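
The first operation, minimizing the K-S statistic by grid search, looks like this for a single-parameter family. The exponential family and the claim sizes below are illustrative assumptions, not the paper's nine fitted distributions or its insurance data.

```python
import math

def ks_statistic(data, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical
    CDF of `data` and a candidate model CDF."""
    xs = sorted(data)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
        for i, x in enumerate(xs)
    )

def min_ks_exponential(data, rates):
    """Grid search: pick the exponential rate minimizing the K-S statistic."""
    best = min(rates,
               key=lambda lam: ks_statistic(data, lambda x: 1 - math.exp(-lam * x)))
    return best, ks_statistic(data, lambda x: 1 - math.exp(-best * x))

# Hypothetical claim sizes and a grid of candidate rates.
claims = [0.3, 0.9, 1.4, 2.2, 3.8, 5.1]
rate, d = min_ks_exponential(claims, [i / 100 for i in range(1, 301)])
```

The PH-transform step would then re-weight the fitted distribution's survival function before recomputing the K-S distance.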

  8. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Using the fixed-point theorem, sovereign default models are solved by numerical value function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forgo country-specific quantitative projection. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. Thus, with this technique we offer an alternative solution method for dynamic general equilibrium models of sovereign default to improve their quantitative inference ability.

  9. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
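
RSS-based location estimation typically inverts a log-distance path-loss model. The grid-search sketch below is illustrative: the path-loss constants, the anchor layout standing in for the destination and relay, and the noiseless measurements are all assumptions, and the paper's CRLB analysis is not reproduced.

```python
import math

def rss_model(src, anchor, p0=-40.0, n=2.0):
    """Received signal strength (dBm) under a log-distance path-loss
    model: p0 at 1 m, decaying 10*n dB per decade of distance."""
    d = math.dist(src, anchor)
    return p0 - 10.0 * n * math.log10(max(d, 1e-9))

def locate(anchors, rss_meas, grid):
    """Pick the grid point whose predicted RSS at every anchor best
    matches the measurements (least squares)."""
    def cost(p):
        return sum((rss_model(p, a) - r) ** 2 for a, r in zip(anchors, rss_meas))
    return min(grid, key=cost)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical receiver positions
true_src = (4.0, 6.0)
meas = [rss_model(true_src, a) for a in anchors]    # noiseless for illustration
grid = [(x * 1.0, y * 1.0) for x in range(11) for y in range(11)]
est = locate(anchors, meas, grid)
```

With three non-collinear anchors and noiseless measurements the true grid point is the unique zero-cost minimizer; noise and the relay's cognitive on/off behaviour are what make the CRLB analysis in the paper necessary.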

  10. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
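
Latin Hypercube Sampling, one of GRAPE's two parameter-placement techniques, can be sketched directly. This is generic LHS on the unit cube; GRAPE's resampling logic around the current parameter estimate is not reproduced.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube Sampling on the unit cube: each dimension is cut
    into n_samples equal strata and each stratum is hit exactly once,
    so adding dimensions does not require more samples."""
    rng = rng or random.Random(0)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)               # independent permutation per dimension
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(8, 3)
```

The one-sample-per-stratum property is exactly why the model count stays fixed as the parameter dimension grows, in contrast to a full grid like SGBS.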

  11. IR-based spot weld NDT in automotive applications

    Science.gov (United States)

    Chen, Jian; Feng, Zhili

    2015-05-01

    Today's auto industry primarily relies on destructive teardown evaluation to ensure the quality of resistance spot welds (RSWs) due to their criticality in the crash resistance and performance of vehicles. The destructive teardown evaluation is labor intensive and costly. The very nature of the destructive test means only a few selected welds will be sampled for quality; most of the welds in a car are never checked. There are significant costs and risks associated with reworking and scrapping the defective welded parts made between the teardown tests. IR thermography as a non-destructive testing (NDT) tool has a distinct advantage: its non-intrusive and non-contact nature. This makes IR-based NDT especially attractive for highly automated assembly lines. IR for weld quality inspection has been explored in the past, mostly limited to offline post-processing in a laboratory environment. No online real-time RSW inspection using IR thermography has been reported. Typically for post-processing inspection, short-pulse heating via a xenon flash lamp (a few milliseconds) is applied to the surface of a spot weld. However, applications in the auto industry have been unsuccessful, largely due to a prerequisite that cannot be implemented in the high-volume production line: painting the weld surface to eliminate surface reflection and other environmental interference. This is due to the low signal-to-noise ratio resulting from the low/unknown surface emissivity and the very small temperature changes (typically on the order of 0.1°C) induced by the flash lamp method. This work presents an integrated approach, consisting of innovations in both data analysis algorithms and hardware apparatus, that effectively solves the key technical barriers to IR NDT. The system can be used for both real-time (during welding) and post-processing inspections (after welds have been made). First, we developed a special IR thermal image processing method that...

  12. Estimation of fatigue life using electromechanical impedance technique

    Science.gov (United States)

    Lim, Yee Yan; Soh, Chee Kiong

    2010-04-01

    Fatigue-induced damage is often progressive and gradual in nature. Structures subjected to a large number of fatigue load cycles undergo progressive crack initiation, propagation and finally fracture. Monitoring of structural health, especially for critical components, is therefore essential for early detection of potentially harmful cracks. Recently developed smart-material approaches, such as piezo-impedance transducers employing the electromechanical impedance (EMI) and wave propagation techniques, are well proven to be effective in incipient damage detection and characterization. Exceptional advantages, such as autonomous, real-time, online and remote monitoring, may provide a cost-effective alternative to conventional structural health monitoring (SHM) techniques. The main focus of this study is to investigate the feasibility of characterizing a propagating fatigue crack in a structure using the EMI technique, as well as estimating its remaining fatigue life using the linear elastic fracture mechanics (LEFM) approach. A uniaxial cyclic tensile load is applied to a lab-sized aluminum beam up to failure. The progressive shift in the admittance signatures measured by the piezo-impedance transducer (PZT patch) with increasing load cycles reflects the effectiveness of the EMI technique in tracing the process of fatigue damage progression. With the use of LEFM, prediction of the remaining life of the structure at different loading cycles is possible.
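
    The LEFM remaining-life calculation described above can be sketched numerically: integrate Paris' crack-growth law da/dN = C (ΔK)^m, with ΔK = Y Δσ √(πa), from the current crack length to the critical length. The constants C, m and Y below are illustrative placeholders, not values from the paper.

```python
import math

def remaining_life_cycles(a0, ac, delta_sigma, C=1e-11, m=3.0, Y=1.12, steps=10000):
    """Numerically integrate Paris' law, da/dN = C * (dK)^m, with
    dK = Y * delta_sigma * sqrt(pi * a), from initial crack length a0
    to critical length ac (lengths in metres, stress in MPa)."""
    cycles = 0.0
    da = (ac - a0) / steps
    a = a0
    for _ in range(steps):
        a_mid = a + 0.5 * da                              # midpoint rule
        dK = Y * delta_sigma * math.sqrt(math.pi * a_mid)
        cycles += da / (C * dK ** m)
        a += da
    return cycles
```

    A longer detected crack (larger a0) yields fewer remaining cycles, which is the quantity tracked as the admittance signatures shift.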

  13. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and the results are compared with each other and with earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly with accidents, with the number of plants and with the cumulative number of operating years. Using as a database nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant having a serious flaw decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. Over the same period, the frequency of accidents decreased from 0.04 per reactor year to 0.0004 per reactor year.
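
    Maximum likelihood fitting of a learning curve to accident counts can be sketched as follows, assuming an illustrative exponential-decay Poisson model with period rate lambda_i = alpha * exp(-beta * i) per reactor-year; the paper's actual learning models and data set differ.

```python
import math

def fit_learning_curve(counts, exposures):
    """Grid-search MLE for a Poisson learning model: counts[i] accidents
    observed over exposures[i] reactor-years in period i, with rate
    alpha * exp(-beta * i).  For a fixed beta, alpha has a closed form;
    beta is found on a coarse grid (a sketch, not production code)."""
    best_ll, best = -float("inf"), (None, None)
    total = sum(counts)
    for k in range(0, 301):
        beta = k / 100.0
        weighted = sum(math.exp(-beta * i) * e for i, e in enumerate(exposures))
        alpha = total / weighted           # closed-form MLE given beta
        ll = 0.0
        for i, (c, e) in enumerate(zip(counts, exposures)):
            mean = alpha * math.exp(-beta * i) * e
            ll += c * math.log(mean) - mean  # Poisson log-likelihood (up to const.)
        if ll > best_ll:
            best_ll, best = ll, (alpha, beta)
    return best
```

    A fitted beta significantly above zero indicates learning, i.e. a declining accident rate with time.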

  14. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  16. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    Full Text Available. The aim of this paper is to propose simple and quick methods for determining the sound power emitted by a vibrating surface using non-contact vibration measurement techniques. Two different approaches to calculating the acoustic power from vibration data are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser Doppler scanning vibrometer was employed for the vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusiveness. The technique is applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software package and experimentally validated by acoustic intensity measurements. Predicted and experimental results are in agreement (differences lower than 1 dB), showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effects are discussed.
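
    The ISO/TR 7849-style approach estimates sound power from the surface-averaged mean-square normal velocity, W = rho * c * S * sigma * <v^2>. A minimal sketch, with the radiation efficiency sigma assumed to be unity (the standard's corrections are omitted):

```python
import math

RHO_AIR = 1.204  # air density, kg/m^3 (approx. 20 degC)
C_AIR = 343.0    # speed of sound in air, m/s

def sound_power_level(v_rms_points, area, sigma=1.0):
    """Sound power level (dB re 1 pW) from vibrometer-measured RMS normal
    velocities at points on a radiating surface of the given area (m^2),
    using W = rho * c * S * sigma * <v^2>."""
    v2 = sum(v * v for v in v_rms_points) / len(v_rms_points)  # surface average
    W = RHO_AIR * C_AIR * area * sigma * v2
    return 10.0 * math.log10(W / 1e-12)
```

    Doubling the surface velocity raises the level by about 6 dB, as expected for a squared quantity.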

  17. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    Full Text Available. The accurate estimation of deposits adhering to insulators is critical to preventing pollution flashovers, which cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. To fill that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input sets under the prerequisite that the resultant set is equivalent to the full set in terms of the ability to distinguish severity levels of IC. We found that AOD, the strength of the pollution source and precipitation are the top three decisive factors for estimating insulator contamination. On that basis, different classification algorithms, namely Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood, were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performance of the different methods. SVM yielded the best overall accuracy among the three algorithms, exceeding 70%, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contamination.

  18. A low tritium hydride bed inventory estimation technique

    Energy Technology Data Exchange (ETDEWEB)

    Klein, J.E.; Shanahan, K.L.; Baker, R.A. [Savannah River National Laboratory, Aiken, SC (United States); Foster, P.J. [Savannah River Nuclear Solutions, Aiken, SC (United States)

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low-concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing-gas calorimetric inventory measurement method, and low tritium beds contain less than the detection limit of the in-bed accountability (IBA) technique used for tritium inventory. This paper describes two techniques for estimating the tritium content and its uncertainty for low tritium content beds, for use in the facility's physical inventory (PI); PIs are performed periodically to assess the quantity of nuclear material in a facility. The first approach, the mid-point approximation (MPA) method, assumes the bed is half full and uses a gas composition measurement to estimate the tritium inventory and its uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.
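
    The mid-point approximation can be sketched in a few lines. The function name, arguments and the bare half-full formula below are illustrative assumptions; the paper's treatment of the associated uncertainty is considerably more detailed.

```python
def mpa_inventory_grams(capacity_mol, t_mole_fraction, molar_mass_t=3.016):
    """Mid-point approximation (MPA) sketch: assume the hydride bed is
    half full of hydrogen isotopes whose measured tritium mole fraction
    is t_mole_fraction.  capacity_mol is the bed's full capacity in
    moles of isotope atoms; returns grams of tritium."""
    moles_t = 0.5 * capacity_mol * t_mole_fraction  # half-full assumption
    return moles_t * molar_mass_t
```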

  19. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Bojan Nemec

    2014-10-01

    Full Text Available. High-precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to their relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined by the antenna placed typically behind the skier's neck. A key issue is how to estimate other, more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified it. In this study, we propose two machine learning methods that overcome this shortcoming and estimate the COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform those of the commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques to biomechanical measurements of alpine skiing.

  20. Using support vector machines in the multivariate state estimation technique

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K.C.

    1999-01-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications

  1. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty, and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. To apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used the multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about what is the "true" value for prevalence of using illegal psychotropic drugs in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce 'under-estimates' or at best, highly conservative, lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
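
    As a concrete illustration of how a randomized-response design corrects the observed proportion of "yes" answers, here is Warner's classic RRT estimator. This is the textbook design (each respondent answers the sensitive question with probability p and its negation otherwise), not necessarily the exact pairing of sensitive and non-sensitive questions used in the Georgian survey.

```python
def warner_rrt_estimate(n_yes, n, p):
    """Warner's randomized-response estimator (p != 0.5).  Returns the
    estimated prevalence of the sensitive trait and the sampling
    variance of that estimate."""
    lam = n_yes / n                                   # observed "yes" proportion
    pi_hat = (lam - (1.0 - p)) / (2.0 * p - 1.0)      # unbias the proportion
    var = lam * (1.0 - lam) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var
```

    The price of the privacy protection is visible in the variance term: it grows as p approaches 0.5, which is one reason RRT estimates have wider confidence intervals than standard self-report estimates.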

  2. Estimating the Celestial Reference Frame via Intra-Technique Combination

    Science.gov (United States)

    Iddink, Andreas; Artz, Thomas; Halsig, Sebastian; Nothnagel, Axel

    2016-12-01

    One of the primary goals of Very Long Baseline Interferometry (VLBI) is the determination of the International Celestial Reference Frame (ICRF). Currently the third realization of the internationally adopted CRF, the ICRF3, is under preparation. In this process, various optimizations are planned to realize a CRF that does not benefit only from the increased number of observations since the ICRF2 was published. The new ICRF can also benefit from an intra-technique combination as is done for the Terrestrial Reference Frame (TRF). Here, we aim at estimating an optimized CRF by means of an intra-technique combination. The solutions are based on the input to the official combined product of the International VLBI Service for Geodesy and Astrometry (IVS), also providing the radio source parameters. We discuss the differences in the setup using a different number of contributions and investigate the impact on TRF and CRF as well as on the Earth Orientation Parameters (EOPs). Here, we investigate the differences between the combined CRF and the individual CRFs from the different analysis centers.

  3. Project cost estimation techniques used by most emerging building ...

    African Journals Online (AJOL)

    Keywords: cost estimation, estimation methods, emerging contractors, tender. Dr Solly Matshonisa. [Fragmentary preview: historical cost data from cost accounting records; use of project risk management versus responsibility; internal document analysis; checklist analysis.]

  4. NO Reactions Over Ir-Based Catalysts in the Presence of O2

    Directory of Open Access Journals (Sweden)

    Mingxin Guo

    2011-01-01

    Full Text Available. The behaviour of a series of Ir-based catalysts supported on SiO2, ZSM-5 and γ-Al2O3 with various Ir loadings, prepared by the impregnation method, was studied by the temperature-programmed reaction (TPR) technique. The results imply that, over iridium catalysts, NO is oxidized to NO2 while simultaneously being reduced to N2 or N2O. The surface active phase that promotes the NO reactions over iridium catalysts is IrO2. The catalytic activity increases with Ir loading, and the support material has little effect on it. When the loading is less than 0.1%, the catalytic activity depends on the nature of the support material, in the order Ir/ZSM-5 > Ir/γ-Al2O3 > Ir/SiO2. When the loading is higher than 0.1%, the activity for NO oxidation follows the order Ir/ZSM-5 > Ir/SiO2 > Ir/γ-Al2O3, which correlates with the Ir dispersion on the support surface, while the activity for NO reduction follows the order Ir/γ-Al2O3 > Ir/SiO2 > Ir/ZSM-5, which is attributed to the adsorption-dissociation of NO2. Compared with Pt/γ-Al2O3, the Ir/γ-Al2O3 catalyst is more beneficial for NO reduction.

  5. Estimation of Correlation Functions by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Dec technique is a versatile technique for the characterization of random signals in the time domain. In this paper a short review of the most important properties of the technique is given. The review is mainly based on recently achieved results that are still unpublished, or that have just

  6. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Science.gov (United States)

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam. Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  7. Nonlinear Filtering Techniques Comparison for Battery State Estimation

    Directory of Open Access Journals (Sweden)

    Aspasia Papazoglou

    2014-09-01

    Full Text Available. The performance of estimation algorithms is vital for the correct functioning of batteries in electric vehicles, as poor estimates will inevitably jeopardize operations that rely on un-measurable quantities such as State of Charge and State of Health. This paper compares the performance of three nonlinear estimation algorithms applied to a lithium-ion cell model: the Extended Kalman Filter, the Unscented Kalman Filter and the Particle Filter. The effectiveness of these algorithms is measured by their ability to produce accurate estimates against their computational complexity in terms of the number of operations and the execution time required. The trade-offs between the estimators' performance and their computational complexity are analyzed.
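
    As a minimal illustration of the filtering idea shared by all three algorithms, here is a single predict/update step of a scalar (linearized) Kalman filter for state-of-charge estimation. The coulomb-counting process model and the linear open-circuit-voltage observation, along with every constant, are illustrative assumptions, not the paper's cell model.

```python
def soc_kalman_step(x, P, i_meas, v_meas, dt, Q=1e-7, R=1e-4,
                    cap_c=3600.0, ocv0=3.0, k_ocv=1.0):
    """One Kalman predict/update step for battery state of charge (SoC):
    process model  soc -= i * dt / capacity   (coulomb counting)
    observation    v = ocv0 + k_ocv * soc     (linearized OCV curve)."""
    # predict
    x = x - i_meas * dt / cap_c
    P = P + Q
    # update with the terminal-voltage measurement
    y = v_meas - (ocv0 + k_ocv * x)   # innovation
    S = k_ocv * P * k_ocv + R         # innovation variance
    K = P * k_ocv / S                 # Kalman gain
    x = x + K * y
    P = (1.0 - K * k_ocv) * P
    return x, P
```

    The EKF generalizes this step by re-linearizing a nonlinear cell model at each iteration; the UKF and Particle Filter avoid linearization at a higher computational cost, which is the trade-off the paper quantifies.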

  8. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
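
    The bound-narrowing idea behind LOBARE can be caricatured with a simple heuristic search: sample the current parameter box, keep the fittest samples, shrink the bounds to their span, and repeat. This sketch deliberately omits the Bayesian inference machinery of the actual method and is an illustration of the shrinking-bounds concept only.

```python
import random

def bound_narrowing_search(f, bounds, iters=6, samples=300, keep=0.2, seed=1):
    """Heuristic sketch of iterative bound narrowing: at each iteration,
    sample the current box uniformly, rank samples by the objective f
    (lower is better), and shrink each dimension's bounds to the span of
    the fittest samples.  Returns the best point of the final iteration."""
    rng = random.Random(seed)
    pts = []
    for _ in range(iters):
        pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(samples)]
        pts.sort(key=f)
        elite = pts[: max(2, int(keep * samples))]
        bounds = [(min(p[d] for p in elite), max(p[d] for p in elite))
                  for d in range(len(bounds))]
    return pts[0]
```

    Each pass concentrates the sampling effort on the region most likely to enclose a good parameter set, which is the intuition behind updating the "parent" bounds based on their fitness.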

  9. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods are considered: Inverse Distance Weighting (IDW) and Radial Basis Functions (RBF). The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin-plate spline model is suitable for estimating the temperature for the months of January and December, while RBF with a multiquadric model is suitable for the rest of the months.
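
    A minimal IDW interpolator, together with the RMSE score used to compare methods, might look like the following sketch (station coordinates and the power parameter are generic; the RBF variants are omitted for brevity):

```python
import math

def idw(x, y, pts, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from known stations
    pts = [(xi, yi, zi), ...]: nearer stations get larger weights."""
    num = den = 0.0
    for xi, yi, zi in pts:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi                       # exactly on a station
        w = d2 ** (-power / 2.0)            # w = 1 / d^power
        num += w * zi
        den += w
    return num / den

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))
```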

  10. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic ... The existence and local stability properties of the equilibria ...

  11. Costs of regulatory compliance: categories and estimating techniques

    International Nuclear Information System (INIS)

    Schulte, S.C.; McDonald, C.L.; Wood, M.T.; Cole, R.M.; Hauschulz, K.

    1978-10-01

    Use of the categorization scheme and cost estimating approaches presented in this report can make cost estimates of regulation-required compliance activities of value to policy makers. The report describes a uniform assessment framework that, when used, would assure that cost studies are generated on an equivalent basis. Such normalization would make comparisons of different compliance activity cost estimates more meaningful, thus enabling the relative merits of different regulatory options to be judged more effectively. The framework establishes uniform cost reporting accounts and cost estimating approaches for use in assessing the costs of complying with regulatory actions. It was specifically developed for use in a current study at Pacific Northwest Laboratory; however, use of the procedures for other applications is also appropriate.

  12. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Ç elebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine

  13. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  14. A Novel DOA Estimation Algorithm Using Array Rotation Technique

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lan

    2014-03-01

    Full Text Available. The performance of traditional direction-of-arrival (DOA) estimation algorithms based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. To solve these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple-target estimation is proposed. By rotating two elements and setting a fixed time delay, an even number of virtual element positions is obtained to form a virtual UCA. The received signal is then sampled at multiple positions, which greatly improves array element utilization. 2D-DOA estimation on the rotated array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm, with high resolution and estimation accuracy. Moreover, because of the significant reduction in the number of array elements, the antenna system is much simpler and less complex than traditional arrays.
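
    The MUSIC step of such a pipeline can be sketched for the simpler case of a uniform linear array; the paper's rotated-baseline virtual UCA needs a different steering vector, so this is only an illustration of the subspace idea, with the element spacing fixed at half a wavelength.

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5, angles=None):
    """1-D MUSIC pseudo-spectrum for a half-wavelength uniform linear
    array.  X is the (n_elements, n_snapshots) complex snapshot matrix;
    peaks of the returned spectrum indicate source directions (degrees)."""
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 361)
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, : X.shape[0] - n_src]        # noise-subspace eigenvectors
    m = np.arange(X.shape[0])
    P = np.empty(len(angles))
    for i, th in enumerate(np.radians(angles)):
        a = np.exp(2j * np.pi * d * m * np.sin(th))   # steering vector
        P[i] = 1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a)
    return angles, P
```

    Steering vectors orthogonal to the noise subspace make the denominator vanish, producing the sharp peaks that give MUSIC its super-resolution behaviour.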

  15. Indirect child mortality estimation technique to identify trends of ...

    African Journals Online (AJOL)

    Background: In sub-Saharan African countries, the chance of a child dying before the age of five years is high. The prob- ... of child birth and the age distribution of child mortality [11,12]. ... value can be estimated from age-specific fertility rates.

  16. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time is presented, based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally produced by conventional statistical methods or by Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require the input of questionable distributional assumptions, and neural computing is weak, particularly in determining topology. To overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation of the CI hybrid model is presented along with its application to a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction, and it can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results than the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. To demonstrate its effectiveness, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.

  17. A comparison of spatial rainfall estimation techniques: A case study ...

    African Journals Online (AJOL)

    Two geostatistical interpolation techniques (kriging and cokriging) were evaluated against inverse distance weighted (IDW) and global polynomial interpolation (GPI). Of the four spatial interpolators, kriging and cokriging produced results with the least root mean square error (RMSE). A digital elevation model (DEM) was ...

  18. Metrological and reliable characteristics of transducers: estimation techniques

    International Nuclear Information System (INIS)

    Volkov, V.A.; Ryzhakov, V.V.

    1993-01-01

    Methods and techniques are presented for estimating the dispersions of measurement-method errors due to different factors, the nonlinearity of transformation functions and their hysteresis, as well as for estimating the total operating time of measuring instruments in long-term use. A program for computer processing of statistical data is given. 65 refs.

  19. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  20. Cost Estimation Techniques for C3I System Software.

    Science.gov (United States)

    1984-07-01

    [Fragmentary preview] ...opment man-month have been determined for maxi, midi and mini type computers. Small- to median-size timeshared developments used 0.2 to 1.5 hours ... development schedule ... (2.1.3 Detailed Model) The final codification of the COCOMO regressions was the development of separate effort ... regardless of the software structure level being estimated: DEVC -- the expected development computer (maxi, midi, mini, micro); MODE -- the expected ...

  1. Field Test of Gopher Tortoise (Gopherus Polyphemus) Population Estimation Techniques

    Science.gov (United States)

    2008-04-01

    Web (WWW) at URL: http://www.cecer.army.mil (ERDC/CERL TR-08-7, section 2, Field Tests). The gopher tortoise is a species of conservation concern in the ... L = (b / [cv_t(D-hat)]^2) (L_0 / n_0)   (4) where: L = estimate of line length to be sampled, b = dispersion parameter, cv_t(D-hat) = desired coefficient of variation of the density estimate D-hat, and L_0, n_0 = line length and number of detections from a pilot survey.

  2. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
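
    Once a force history has been estimated, the single-plane unbalance amplitude and phase at a known rotational speed omega can be recovered by a least-squares sinusoid fit. This is a simplified stand-in that skips the paper's Kalman-filter, recursive-least-squares and SEREP model-reduction stages; the function and its arguments are illustrative.

```python
import math

def fit_unbalance_force(forces, times, omega):
    """Least-squares fit of f(t) = A*cos(omega*t) + B*sin(omega*t) to a
    sampled force signal; returns (amplitude, phase) such that
    f(t) = amplitude * cos(omega*t + phase)."""
    scc = sss = scs = sfc = sfs = 0.0     # sums for the 2x2 normal equations
    for f, t in zip(forces, times):
        c, s = math.cos(omega * t), math.sin(omega * t)
        scc += c * c; sss += s * s; scs += c * s
        sfc += f * c; sfs += f * s
    det = scc * sss - scs * scs
    A = (sfc * sss - sfs * scs) / det
    B = (sfs * scc - sfc * scs) / det
    return math.hypot(A, B), math.atan2(-B, A)
```

    Dividing the fitted force amplitude by omega squared would give the unbalance (mass times eccentricity), since the rotating unbalance force scales as m*e*omega^2.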

  3. New Solid Phases for Estimation of Hormones by Radioimmunoassay Technique

    International Nuclear Information System (INIS)

    Sheha, R.R.; Ayoub, H.S.M.; Shafik, M.

    2013-01-01

    The efforts in this study were initiated to develop and validate new solid phases for the estimation of hormones by radioimmunoassay (RIA). The study demonstrated the successful application of different hydroxyapatites (HAP) as new solid phases for the estimation of alpha-fetoprotein (AFP), thyroid stimulating hormone (TSH) and luteinizing hormone (LH) in human serum. Hydroxyapatites containing different alkaline earth elements were successfully prepared by a well-controlled co-precipitation method with a stoichiometric ratio value of 1.67. The synthesized barium and calcium hydroxyapatites were characterized using XRD and FTIR, and the data confirmed the preparation of pure structures of both BaHAP and CaHAP with no evidence of additional phases. The prepared solid phases were applied in various radioimmunoassay systems for the separation of bound and free antigens of the AFP, TSH and LH hormones. The radiolabeled tracers for these antigens were prepared using chloramine-T as the oxidizing agent. The influence of different parameters on the activation and coupling of the apatite particles with the polyclonal antibodies was systematically investigated and the optimum conditions were determined. The assay was reproducible, specific and sensitive enough for routine estimation of the studied hormones. The intra- and inter-assay variations were satisfactory, and the recovery and dilution tests indicated an accurate calibration. The reliability of these apatite particles was validated by comparing the results with those obtained using commercial kits. The results confirm that hydroxyapatite particles have great potential to address the emerging challenge of accurate quantitation in laboratory medical applications.

  4. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
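
As a minimal illustration of the first of the three samplers compared above (not the authors' code), a random-walk Metropolis-Hastings chain over a toy one-dimensional Gaussian log-posterior might look like this; the target mean 2.0 and width 0.5 are arbitrary stand-ins for a real likelihood:

```python
import math, random

random.seed(0)

def log_post(theta):
    # toy 1-D Gaussian posterior, mean 2.0, std 0.5 (stand-in for a real likelihood)
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis_hastings(log_p, theta0, n_steps, step=0.5):
    chain, theta, lp = [], theta0, log_p(theta0)
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)        # symmetric Gaussian proposal
        lp_prop = log_p(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)                           # rejected moves repeat the state
    return chain

chain = metropolis_hastings(log_post, 0.0, 20000)
burned = chain[5000:]                                 # discard burn-in
mean = sum(burned) / len(burned)
var = sum((t - mean) ** 2 for t in burned) / len(burned)
print(mean, math.sqrt(var))
```

For a well-tuned proposal, the post-burn-in sample mean and standard deviation should approach the target's 2.0 and 0.5.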

  5. Comparative Study of Online Open Circuit Voltage Estimation Techniques for State of Charge Estimation of Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Hicham Chaoui

    2017-04-01

    Full Text Available Online estimation techniques are extensively used to determine the parameters of various uncertain dynamic systems. In this paper, online estimation of the open-circuit voltage (OCV) of lithium-ion batteries is proposed by two different adaptive filtering methods (recursive least squares, RLS, and least mean squares, LMS), along with an adaptive observer. The proposed techniques use the battery's terminal voltage and current to estimate the OCV, which is correlated to the state of charge (SOC). Experimental results highlight the effectiveness of the proposed methods in online estimation at different charge/discharge conditions and temperatures. The comparative study illustrates the advantages and limitations of each online estimation method.
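
A minimal sketch of one of the compared ideas, RLS with a forgetting factor applied to a hypothetical zeroth-order battery model V = OCV − I·R0 (far simpler than a real cell; all values are invented):

```python
import random

random.seed(2)

# hypothetical "true" battery values
ocv_true, r0_true = 3.7, 0.05        # open-circuit voltage [V], internal resistance [ohm]
lam = 0.99                           # forgetting factor

theta = [3.0, 0.0]                   # initial guesses [OCV, R0]
P = [[100.0, 0.0], [0.0, 100.0]]     # covariance matrix

for step in range(2000):
    I = 1.0 + 0.5 * random.random()                        # varying load current [A]
    V = ocv_true - I * r0_true + random.gauss(0.0, 1e-3)   # measured terminal voltage
    phi = [1.0, -I]                                        # regressor: V = OCV - I*R0
    # gain K = P phi / (lam + phi' P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = V - (phi[0] * theta[0] + phi[1] * theta[1])      # prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # covariance update: P = (P - K phi' P) / lam
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]

print(f"OCV ~ {theta[0]:.3f} V, R0 ~ {theta[1]:.4f} ohm")
```

Provided the current is persistently exciting, the estimates converge to the true OCV and resistance; the estimated OCV would then be mapped to SOC through an OCV-SOC curve.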

  6. Development of flow injection analysis technique for uranium estimation

    International Nuclear Information System (INIS)

    Paranjape, A.H.; Pandit, S.S.; Shinde, S.S.; Ramanujam, A.; Dhumwad, R.K.

    1991-01-01

    Flow injection analysis is increasingly used as a process control analytical technique in many industries. It involves injection of the sample at a constant rate into a steadily flowing stream of reagent and passing this mixture through a suitable detector. This paper describes the development of such a system for the analysis of uranium (VI) and (IV) and its gross gamma activity. It is amenable to on-line or automated off-line monitoring of uranium and its activity in process streams. The sample injection port is suitable for automated injection of radioactive samples. The performance of the system has been tested for the colorimetric response of U(VI) samples at 410 nm in the range of 35 to 360 mg/ml in nitric acid medium using a Metrohm 662 photometer and a recorder as the detector assembly. The precision of the method is found to be better than +/- 0.5%. This technique, with certain modifications, is used for the analysis of U(VI) in the range 0.1-3 mg/aliq. by the alcoholic thiocyanate procedure within +/- 1.5% precision. Similarly, the precision for the determination of U(IV) in the range 15-120 mg at 650 nm is found to be better than 5%. With a NaI well-type detector in the flow line, the gross gamma counting of the solution under flow is found to be within a precision of +/- 5%. (author). 4 refs., 2 figs., 1 tab

  7. Comparison of techniques for estimating herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.; Taweel, H.Z.; Tas, B.M.; Tamminga, S.; Elgersma, A.

    2005-01-01

    For estimating herbage intake during grazing, the traditional sward cutting technique was compared in grazing experiments in 2002 and 2003 with the recently developed n-alkanes technique and with the net energy method. The first method estimates herbage intake by the difference between the herbage

  8. Estimating soil contamination from oil spill using neutron backscattering technique

    International Nuclear Information System (INIS)

    Okunade, I.O.; Jonah, S.A.; Abdulsalam, M.O.

    2009-01-01

    An analytical facility based on the neutron backscattering technique has been adapted for monitoring oil spills. The facility, which consists of a 1 Ci Am-Be isotopic source and a ³He neutron detector, is based on the principle of neutron slowing down in a given medium, which is dominated by elastic scattering from hydrogen nuclei. On this principle, the neutron reflection parameter in the presence of hydrogenous materials such as coal, crude oil and other hydrocarbons depends strongly on the number of hydrogen nuclei present. Consequently, the facility was adapted in this work for the quantification of crude oil in contaminated soil. The description of the facility and the analytical procedures for quantification of oil spill in soil contaminated with different amounts of crude oil are provided

  9. Background estimation techniques in searches for heavy resonances at CMS

    CERN Document Server

    Benato, Lisa

    2017-01-01

    Many Beyond Standard Model theories foresee the existence of heavy resonances (above 1 TeV) decaying into final states that include a highly energetic, boosted jet and charged leptons or neutrinos. In these very peculiar conditions, Monte Carlo predictions are not reliable enough to reproduce accurately the expected Standard Model background. A data-Monte Carlo hybrid approach (the alpha method) has been successfully adopted since Run 1 in searches for heavy Higgs bosons performed by the CMS Collaboration. By taking advantage of data in signal-free control regions, determined by exploiting the boosted jet substructure, predictions are extracted in the signal region. The alpha method and jet substructure techniques are described in detail, along with some recent results obtained with 2016 Run 2 data collected by the CMS detector.

  10. Rumen microbial growth estimation using in vitro radiophosphorous incorporation technique

    International Nuclear Information System (INIS)

    Bueno, Ives Claudio da Silva; Machado, Mariana de Carvalho; Cabral Filho, Sergio Lucio Salomon; Gobbo, Sarita Priscila; Vitti, Dorinha Miriam Silber Schmidt; Abdalla, Adibe Luiz

    2002-01-01

    Rumen microorganisms are able to transform the low biological value nitrogen of feedstuffs into high quality protein. To determine how much microbial protein this process forms, radiomarkers can be used. Radiophosphorus has been used to mark microbial protein, as the element P is present in all rumen microorganisms (as phospholipids) and the P:N ratio of rumen biomass is quite constant. The aim of this work was to estimate microbial synthesis from feedstuffs commonly used in ruminant nutrition in Brazil. Tested feeds were fresh alfalfa, raw sugarcane bagasse, rice hulls, rice meal, soybean meal, wheat meal, Tifton hay, leucaena, dehydrated citrus pulp, wet brewers' grains and cottonseed meal. A ³²P-labelled phosphate solution was used as the marker for microbial protein. Results showed the diversity of the feeds through the distinct quantities of nitrogen incorporated into microbial mass. Feeds of low nutrient availability (sugarcane bagasse and rice hulls) promoted the lowest values of incorporated nitrogen. Nitrogen incorporation showed a positive relationship (r=0.56; P=0.06) with the rate of degradation and a negative relationship (r=-0.59; P<0.05) with the fiber content of feeds. The results highlight that more easily fermentable feeds (higher rates of degradation) and/or feeds with lower fiber contents promote more efficient microbial growth and better performance for the host animal. (author)

  11. Rumen microbial growth estimation using in vitro radiophosphorous incorporation technique

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Ives Claudio da Silva; Machado, Mariana de Carvalho; Cabral Filho, Sergio Lucio Salomon; Gobbo, Sarita Priscila; Vitti, Dorinha Miriam Silber Schmidt; Abdalla, Adibe Luiz [Centro de Energia Nuclear na Agricultura (CENA), Piracicaba, SP (Brazil)

    2002-07-01

    Rumen microorganisms are able to transform the low biological value nitrogen of feedstuffs into high quality protein. To determine how much microbial protein this process forms, radiomarkers can be used. Radiophosphorus has been used to mark microbial protein, as the element P is present in all rumen microorganisms (as phospholipids) and the P:N ratio of rumen biomass is quite constant. The aim of this work was to estimate microbial synthesis from feedstuffs commonly used in ruminant nutrition in Brazil. Tested feeds were fresh alfalfa, raw sugarcane bagasse, rice hulls, rice meal, soybean meal, wheat meal, Tifton hay, leucaena, dehydrated citrus pulp, wet brewers' grains and cottonseed meal. A ³²P-labelled phosphate solution was used as the marker for microbial protein. Results showed the diversity of the feeds through the distinct quantities of nitrogen incorporated into microbial mass. Feeds of low nutrient availability (sugarcane bagasse and rice hulls) promoted the lowest values of incorporated nitrogen. Nitrogen incorporation showed a positive relationship (r=0.56; P=0.06) with the rate of degradation and a negative relationship (r=-0.59; P<0.05) with the fiber content of feeds. The results highlight that more easily fermentable feeds (higher rates of degradation) and/or feeds with lower fiber contents promote more efficient microbial growth and better performance for the host animal. (author)

  12. Simplified estimation technique for organic contaminant transport in ground water

    Energy Technology Data Exchange (ETDEWEB)

    Piver, W T; Lindstrom, F T

    1984-05-01

    The analytical solution for one-dimensional dispersive-advective transport of a single solute in a saturated soil, accompanied by adsorption onto soil surfaces and first-order reaction rate kinetics for degradation, can be used to evaluate the suitability of potential sites for burial of organic chemicals. The technique can be used to the greatest advantage with organic chemicals that are present in ground waters in small amounts. The steady-state solution provides a rapid method for chemical landfill site evaluation because it contains the important variables that describe interactions between hydrodynamics and chemical transformation. With this solution, solute concentration at a specified distance from the landfill site is a function of the initial concentration and two dimensionless groups. In the first group, the relative weights of advective and dispersive variables are compared, and in the second group the relative weights of hydrodynamic and degradation variables are compared. The ratio of hydrodynamic to degradation variables can be rearranged and written as (a_L λ)/(q/ε), where a_L is the dispersivity of the soil, λ is the reaction rate constant, q is the ground water flow velocity, and ε is the soil porosity. When this term has a value less than 0.01, the degradation process is occurring at such a slow rate relative to the hydrodynamics that it can be neglected. Under these conditions the site is unsuitable because the chemicals are unreactive, and concentrations in ground waters will change very slowly with distance away from the landfill site.
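
The screening logic above can be sketched numerically. The code below uses the standard bounded steady-state solution of the 1-D advection-dispersion equation with first-order decay, C(x)/C0 = exp[x(v − sqrt(v² + 4Dλ))/(2D)] with v = q/ε and D = a_L·v; the parameter values are hypothetical:

```python
import math

def steady_state_ratio(x, q, eps, a_L, lam):
    """C(x)/C0 for steady 1-D advective-dispersive transport with first-order
    decay (bounded solution with C(0) = C0)."""
    v = q / eps                    # average pore-water velocity
    D = a_L * v                    # longitudinal dispersion coefficient
    u = math.sqrt(v * v + 4.0 * D * lam)
    return math.exp(x * (v - u) / (2.0 * D))

def screening_group(q, eps, a_L, lam):
    """The abstract's dimensionless ratio of degradation to hydrodynamic
    variables: (a_L * lam) / (q / eps)."""
    return a_L * lam / (q / eps)

g = screening_group(q=0.1, eps=0.3, a_L=1.0, lam=1e-4)           # = 3e-4 < 0.01
ratio = steady_state_ratio(x=100.0, q=0.1, eps=0.3, a_L=1.0, lam=1e-4)
print(g, ratio)   # degradation is negligible: the plume is barely attenuated
```

Because the screening group is well below 0.01, decay can be neglected and the concentration 100 length-units downgradient remains close to the source concentration, exactly the "unsuitable site" situation the abstract describes.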

  13. Ir-based refractory superalloys by pulse electric current sintering (PECS) process (II prealloyed powder)

    Science.gov (United States)

    Huang, C.; Yamabe-Mitarai, Y.; Harada, H.

    2002-02-01

    Five prealloyed powder samples prepared from binary Ir-based refractory superalloys were sintered at 1800 °C for 4 h by Pulse Electric Current Sintering (PECS). No metal loss was observed during sintering. The relative densities of the sintered specimens all exceeded 90% T.D.; the best was Ir-13% Hf, with a density of 97.82% T.D. The phases detected in the sintered samples were in accordance with the phase diagram, as expected. Fractured surfaces were observed in two samples (Ir-13% Hf and Ir-15% Zr). Some improvements obtained by using prealloyed powders, instead of the elemental powders investigated in previous studies, are presented.

  14. A technique for the radar cross-section estimation of axisymmetric plasmoid

    International Nuclear Information System (INIS)

    Naumov, N D; Petrovskiy, V P; Sasinovskiy, Yu K; Shkatov, O Yu

    2015-01-01

    A model for the backscattering of radio waves from both penetrable and reflecting plasmas is developed. The proposed technique is based on Huygens' principle and reduces the radar cross-section estimation to numerical integrations. (paper)

  15. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    Science.gov (United States)

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  16. Use of tracer technique in estimation of methane (greenhouse gas) from ruminants

    International Nuclear Information System (INIS)

    Singh, G.P.

    1996-01-01

    Several methods developed to estimate the methane emission of ruminant livestock, such as feed-fermentation-based techniques, the use of radioisotopes as tracers, and respiration chambers, are discussed. 6 refs., 3 figs

  17. Motion estimation of tagged cardiac magnetic resonance images using variational techniques

    Czech Academy of Sciences Publication Activity Database

    Carranza-Herrezuelo, N.; Bajo, A.; Šroubek, Filip; Santamarta, C.; Cristóbal, G.; Santos, A.; Ledesma-Carbayo, M.J.

    2010-01-01

    Roč. 34, č. 6 (2010), s. 514-522 ISSN 0895-6111 Institutional research plan: CEZ:AV0Z10750506 Keywords : medical imaging processing * motion estimation * variational techniques * tagged cardiac magnetic resonance images * optical flow Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.110, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/sroubek- motion estimation of tagged cardiac magnetic resonance images using variational techniques.pdf

  18. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the control and synchronization of chaotic systems, respectively. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation, and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables a very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.
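
A minimal, hypothetical sketch of the control idea (not the paper's exact formulation): treat the nonlinearity σ(y − x) in the first Lorenz equation as an unknown signal, estimate it from one-step-delayed state and input data, cancel it, and insert stable error dynamics. The gain, target and step sizes are invented:

```python
# Lorenz parameters and a regulation target for the x-state (values invented)
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 1e-4, 200000             # 20 s of simulated time
kgain = 50.0                         # error feedback gain
x, y, z = 1.0, 1.0, 1.0
x_d = 1.0                            # desired constant value for x
u_prev, x_prev = 0.0, x

for n in range(steps):
    # time delay estimation: the unknown term h = sigma*(y - x) in xdot = h + u
    # is estimated from one-step-delayed data: h_hat ~ xdot(t-dt) - u(t-dt)
    xdot_prev = (x - x_prev) / dt
    h_hat = xdot_prev - u_prev
    e = x - x_d
    u = -h_hat - kgain * e           # cancel nonlinearity, insert stable error dynamics

    # integrate the controlled Lorenz system (explicit Euler)
    x_prev = x
    dx = sigma * (y - x) + u
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    u_prev = u

print(x, z)   # x is regulated to x_d; y and z settle to the induced equilibrium
```

With x pinned at 1, the residual (y, z) subsystem has a stable equilibrium at y = 224/11, z = 84/11, so the control input stays bounded; this is what makes cancelling only the first equation's nonlinearity sufficient here.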

  19. Development and comparison of techniques for estimating design basis flood flows for nuclear power plants

    International Nuclear Information System (INIS)

    1980-05-01

    Estimation of the design basis flood for nuclear power plants can be carried out using either deterministic or stochastic techniques. Stochastic techniques, while widely used for the solution of a variety of hydrological and other problems, have not been used to date (1980) in connection with the estimation of the design basis flood for NPP siting. This study compares the two techniques for one specific river site (Galt on the Grand River, Ontario). The study concludes that both techniques lead to comparable results, but that stochastic techniques have the advantage of extracting maximum information from the available data and presenting the results (flood flow) as a continuous function of probability, together with estimates of confidence limits. (author)

  20. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    Science.gov (United States)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    Studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project are reported. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.

  1. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    Science.gov (United States)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.

  2. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    Science.gov (United States)

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These
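
A highly simplified sketch of the first and third stages (bootstrap aggregation and a bootstrap confidence interval), with the sample mean standing in for the DBN-DNN base estimator and five invented SBP readings:

```python
import random

random.seed(3)

# five hypothetical oscillometric-derived SBP readings for one subject [mmHg]
readings = [118.0, 122.0, 121.0, 119.0, 124.0]

def bootstrap_estimates(data, n_boot=2000):
    """Resample with replacement and collect the base estimator (here: the
    sample mean, standing in for a trained regressor) on each resample."""
    ests = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]
        ests.append(sum(sample) / len(sample))
    return ests

ests = sorted(bootstrap_estimates(readings))
bagged = sum(ests) / len(ests)                       # bagged point estimate
lo = ests[int(0.025 * len(ests))]                    # 2.5th percentile
hi = ests[int(0.975 * len(ests))]                    # 97.5th percentile
print(f"SBP ~ {bagged:.1f} mmHg, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The percentile interval quantifies the uncertainty induced by the very small per-subject sample, which is the weakness the paper's ensemble is designed to mitigate.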

  3. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mohayai, Tanaz Angelina [IIT, Chicago; Snopok, Pavel [IIT, Chicago; Neuffer, David [Fermilab; Rogers, Chris [Rutherford

    2017-10-12

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
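
A minimal example of the non-parametric idea: a Gaussian kernel density estimator applied to two synthetic 1-D beam projections, where the narrower "cooled" sample shows the expected rise in core density (all numbers invented; MICE works with higher-dimensional phase-space densities):

```python
import math, random

random.seed(4)

def gaussian_kde(sample, h):
    """Return a function estimating the density of `sample` with bandwidth h."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def density(u):
        return c * sum(math.exp(-0.5 * ((u - s) / h) ** 2) for s in sample)
    return density

# synthetic 1-D projections "before" and "after" cooling: smaller spread after
before = [random.gauss(0.0, 2.0) for _ in range(2000)]
after = [random.gauss(0.0, 1.0) for _ in range(2000)]

f_before = gaussian_kde(before, h=0.4)
f_after = gaussian_kde(after, h=0.2)

# cooling shrinks the occupied phase-space volume, so the core density rises
print(f_before(0.0), f_after(0.0))
```

No histogram binning or distributional assumption is needed; the estimator is evaluated directly at any point of interest, which is what makes the approach attractive for precise density comparisons.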

  4. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques, using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10⁶-10⁷ year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.
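
The stochastic side of such a comparison can be sketched as follows: fit a Gumbel extreme-value distribution to synthetic annual maximum flows by the method of moments and read off low-probability return levels (all flow values are invented; a real study would use observed records and more careful fitting):

```python
import math, random

random.seed(5)

# hypothetical annual maximum flows [m^3/s] drawn from a Gumbel law
mu_true, beta_true = 500.0, 120.0
annual_max = [mu_true - beta_true * math.log(-math.log(random.random()))
              for _ in range(200)]

# method-of-moments Gumbel fit: beta = s*sqrt(6)/pi, mu = mean - 0.5772*beta
n = len(annual_max)
mean = sum(annual_max) / n
s = math.sqrt(sum((q - mean) ** 2 for q in annual_max) / (n - 1))
beta = s * math.sqrt(6.0) / math.pi
mu = mean - 0.5772 * beta

def return_level(T):
    """Flow exceeded on average once every T years: P(X <= x_T) = 1 - 1/T."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

print(return_level(100.0), return_level(1e6))
```

The fitted curve gives the flood flow as a continuous function of probability, which is exactly the presentation advantage of the stochastic technique noted in the abstract; confidence limits would follow from resampling the fit.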

  5. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.; Asmis, G.J.K.

    1983-01-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques, using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to non-conservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10⁶ to 10⁷ year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.

  6. The importance of the chosen technique to estimate diffuse solar radiation by means of regression

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Talha; Altyn Yavuz, Arzu [Department of Statistics. Science and Literature Faculty. Eskisehir Osmangazi University (Turkey)], email: mtarslan@ogu.edu.tr, email: aaltin@ogu.edu.tr; Acikkalp, Emin [Department of Mechanical and Manufacturing Engineering. Engineering Faculty. Bilecik University (Turkey)], email: acikkalp@gmail.com

    2011-07-01

    The Ordinary Least Squares (OLS) method is one of the most frequently used for the estimation of diffuse solar radiation. The data set must satisfy certain assumptions for the OLS method to work; the most important is that the error terms of the regression equation fitted by OLS must follow a normal distribution. Utilizing an alternative robust estimator to obtain parameter estimates is highly effective in solving problems where normality fails due to the presence of outliers or some other factor. The purpose of this study is to investigate the importance of the chosen technique for the estimation of diffuse radiation. The study describes alternative robust methods frequently used in applications and compares them with the OLS method. Comparing the data set analysis of the OLS and the M-regression (Huber, Andrews and Tukey) techniques, the study found that robust regression techniques are preferable to OLS because of their smoother explanation values.
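
A self-contained sketch of the comparison (Huber M-regression via iteratively reweighted least squares against OLS on data with injected outliers; the coefficients, noise level and tuning constant are all invented):

```python
import random

random.seed(6)

# synthetic diffuse-fraction-style data: y = 0.9 - 0.5*x plus noise, with outliers
xs = [i / 50.0 for i in range(100)]
ys = [0.9 - 0.5 * x + random.gauss(0.0, 0.02) for x in xs]
for i in (10, 40, 70):
    ys[i] += 1.0                      # gross outliers break the normality assumption

def wls(xs, ys, w):
    """Weighted least squares for y = a + b*x (closed form)."""
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

# OLS is just weighted LS with unit weights
a_ols, b_ols = wls(xs, ys, [1.0] * len(xs))

# Huber M-estimation via iteratively reweighted least squares
a, b = a_ols, b_ols
delta = 0.05                          # Huber tuning constant (assumed)
for _ in range(50):
    r = [y - (a + b * x) for x, y in zip(xs, ys)]
    w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
    a, b = wls(xs, ys, w)

print(a_ols, b_ols, a, b)   # the Huber fit stays near the true (0.9, -0.5)
```

The Huber weights shrink the influence of large residuals instead of squaring them, so a handful of outliers barely moves the robust fit while visibly biasing OLS.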

  7. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency domain pilot aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS based techniques are computationally less complex. Unlike MMSE ones, they do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of channel estimators incorporating MMSE based techniques is better than that obtained with LS based techniques. To enhance the MSE performance of LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS estimated channel impulse response (CIR). The advantage of denoising threshold based LS techniques is that they do not require KCS but still render near-optimal performance similar to MMSE based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion of these strategies, is presented.
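
A toy example of the threshold-denoising idea (not from any of the surveyed papers): LS estimation on all-pilot subcarriers, transformation to the CIR domain, and zeroing of taps below a noise-scaled threshold; the channel taps, noise level and threshold rule are invented:

```python
import cmath, math, random

random.seed(7)

N = 64                                     # subcarriers
h = [0j] * N                               # sparse channel impulse response
h[0], h[3], h[7] = 0.8 + 0j, 0.5j, 0.3 + 0.2j

def dft(x, inverse=False):
    """Naive DFT/IDFT, adequate for a 64-point illustration."""
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

H = dft(h)                                  # true channel frequency response
sigma = 0.05                                # per-component noise std
Y = [Hk + complex(random.gauss(0, sigma), random.gauss(0, sigma)) for Hk in H]

# all-ones pilots on every subcarrier, so the LS estimate is simply H_ls = Y / X = Y
H_ls = Y[:]
h_ls = dft(H_ls, inverse=True)              # noisy CIR: true taps + noise everywhere

# denoising: zero every CIR tap whose magnitude is below a noise-scaled threshold
thr = 3.0 * sigma / math.sqrt(N)
h_den = [tap if abs(tap) > thr else 0j for tap in h_ls]
H_den = dft(h_den)

mse_ls = sum(abs(a - b) ** 2 for a, b in zip(H_ls, H)) / N
mse_den = sum(abs(a - b) ** 2 for a, b in zip(H_den, H)) / N
print(mse_ls, mse_den)                      # thresholding should cut the MSE sharply
```

Because the true CIR is sparse, almost all of the LS noise lives in taps that carry no channel energy; discarding them recovers much of the MMSE advantage without any knowledge of the channel statistics.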

  8. An IR-Based Approach Utilizing Query Expansion for Plagiarism Detection in MEDLINE.

    Science.gov (United States)

    Nawab, Rao Muhammad Adeel; Stevenson, Mark; Clough, Paul

    2017-01-01

    The identification of duplicated and plagiarized passages of text has become an increasingly active area of research. In this paper, we investigate methods for plagiarism detection that aim to identify potential sources of plagiarism from MEDLINE, particularly when the original text has been modified through the replacement of words or phrases. A scalable approach based on Information Retrieval is used to perform candidate document selection (the identification of a subset of potential source documents given a suspicious text) from MEDLINE. Query expansion is performed using the UMLS Metathesaurus to deal with situations in which original documents are obfuscated. Various approaches to Word Sense Disambiguation are investigated to deal with cases where there are multiple Concept Unique Identifiers (CUIs) for a given term. Results using the proposed IR-based approach outperform a state-of-the-art baseline based on Kullback-Leibler Distance.

  9. Direction of Arrival Estimation Accuracy Enhancement via Using Displacement Invariance Technique

    Directory of Open Access Journals (Sweden)

    Youssef Fayad

    2015-01-01

    Full Text Available A new algorithm for improving Direction of Arrival Estimation (DOAE) accuracy has been developed. Two contributions are introduced. First, the Doppler frequency shift resulting from the target movement is estimated using the displacement invariance technique (DIT). Second, the effect of the Doppler frequency is modeled and incorporated into the ESPRIT algorithm in order to increase the estimation accuracy. It is worth mentioning that the subspace approach has been employed in the ESPRIT and DIT methods to reduce the computational complexity and the effect of the model's nonlinearity. The DOAE accuracy has been verified against the closed-form Cramér-Rao bound (CRB). The simulation results of the proposed algorithm are better than those of previous estimation techniques, demonstrating the enhanced performance of the estimator.

  10. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, Milton; Manos, George C.

    1994-01-01

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) technique, and then to estimate the coefficient of restitution from this free response estimate. In the paper this approach is investigated by simulating the response of a single degree
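
A hypothetical sketch of the RDD step for a generic linear oscillator (a rocking system is nonlinear, so this only illustrates the signature extraction, not the coefficient-of-restitution estimate): average response segments triggered at level up-crossings so that the random part cancels and the free decay remains:

```python
import math, random

random.seed(8)

# simulate a lightly damped SDOF oscillator under unmeasured random loading
wn, zeta, dt = 10.0, 0.02, 0.005     # natural frequency, damping ratio, time step
steps = 200000
x = v = 0.0
resp = []
for _ in range(steps):
    f = random.gauss(0.0, 50.0)                      # unknown random load
    acc = f - 2.0 * zeta * wn * v - wn * wn * x
    v += dt * acc                                    # semi-implicit Euler (stable)
    x += dt * v
    resp.append(x)

# Random Decrement: average segments starting where the response up-crosses a
# trigger level; the average tends toward the system's free decay
rms = math.sqrt(sum(r * r for r in resp) / len(resp))
trig = rms                                           # 1-sigma trigger level
seg_len = 400                                        # 2 s signature
segs = []
i = 1
while i < len(resp) - seg_len:
    if resp[i - 1] < trig <= resp[i]:                # up-crossing condition
        segs.append(resp[i:i + seg_len])
        i += seg_len                                 # non-overlapping segments
    else:
        i += 1

signature = [sum(s[j] for s in segs) / len(segs) for j in range(seg_len)]
print(len(segs), signature[0], trig)
```

The averaged signature starts at the trigger level and decays like the free response; a damping parameter (or, for a rocking system, the coefficient of restitution) would then be fitted to this estimated free decay.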

  11. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    Science.gov (United States)

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique showed the highest R² (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but differences with other techniques were small in magnitude. The number of stages used in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.

  12. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  13. A new slit lamp-based technique for anterior chamber angle estimation.

    Science.gov (United States)

    Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A

    2014-06-01

    To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE were 0.81 (p gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.

  14. Calculational techniques for estimating population doses from radioactivity in natural gas from nuclearly stimulated wells

    International Nuclear Information System (INIS)

    Barton, C.J.; Moore, R.E.; Rohwer, P.S.; Kaye, S.V.

    1975-01-01

    Techniques for estimating radiation doses from exposure to combustion products of natural gas obtained from wells created by use of nuclear explosives were first developed in the Gasbuggy Project. These techniques were refined and extended by development of a number of computer codes in studies related to the Rulison Project, the second in the series of joint government-industry efforts to demonstrate the feasibility of increasing natural gas production from low-permeability rock formations by use of nuclear explosives. These techniques are described and dose estimates that illustrate their use are given. These dose estimation studies have been primarily theoretical, but we have tried to make our hypothetical exposure conditions correspond as closely as possible with conditions that could exist if nuclearly stimulated natural gas is used commercially. (author)

  15. A concise account of techniques available for shipboard sea state estimation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2017-01-01

    This article gives a review of techniques applied to make sea state estimation on the basis of measured responses on a ship. The general concept of the procedures is similar to that of a classical wave buoy, which exploits a linear assumption between waves and the associated motions. In the frequ...

  16. Water temperature forecasting and estimation using fourier series and communication theory techniques

    International Nuclear Information System (INIS)

    Long, L.L.

    1976-01-01

    Fourier series and statistical communication theory techniques are utilized in the estimation of river water temperature increases caused by external thermal inputs. An example estimate assuming a constant thermal input is demonstrated. A regression fit of the Fourier series approximation of temperature is then used to forecast daily average water temperatures. Also, a 60-day prediction of daily average water temperature is made with the aid of the Fourier regression fit by using significant Fourier components
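
    The Fourier regression idea can be sketched as follows. With evenly sampled data covering whole periods, the least-squares normal equations decouple, so each coefficient reduces to a simple projection; this is a simplified stand-in for the paper's regression fit, with illustrative data:

```python
import math

def fit_fourier(x, n_harmonics, period):
    """Fit a0 + sum_k (a_k cos + b_k sin) to evenly sampled data.
    For samples covering whole periods the least-squares normal
    equations decouple, so each coefficient is a simple projection."""
    n = len(x)
    a0 = sum(x) / n
    coeffs = []
    for k in range(1, n_harmonics + 1):
        w = 2 * math.pi * k / period
        a = 2 / n * sum(x[t] * math.cos(w * t) for t in range(n))
        b = 2 / n * sum(x[t] * math.sin(w * t) for t in range(n))
        coeffs.append((a, b))
    return a0, coeffs

def predict(t, a0, coeffs, period):
    """Evaluate the fitted series at any (e.g. future) time t,
    which is the forecasting step."""
    y = a0
    for k, (a, b) in enumerate(coeffs, start=1):
        w = 2 * math.pi * k / period
        y += a * math.cos(w * t) + b * math.sin(w * t)
    return y
```

    Fitting one year of daily temperatures and evaluating `predict` at a future day gives the kind of 60-day forecast described above, before any thermal-input correction.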

  17. Cost Engineering Techniques and Their Applicability for Cost Estimation of Organic Rankine Cycle Systems

    Directory of Open Access Journals (Sweden)

    Sanne Lemmens

    2016-06-01

    Full Text Available The potential of organic Rankine cycle (ORC systems is acknowledged by both considerable research and development efforts and an increasing number of applications. Most research aims at improving ORC systems through technical performance optimization of various cycle architectures and working fluids. The assessment and optimization of technical feasibility is at the core of ORC development. Nonetheless, economic feasibility is often decisive when it comes down to considering practical instalments, and therefore an increasing number of publications include an estimate of the costs of the designed ORC system. Various methods are used to estimate ORC costs but the resulting values are rarely discussed with respect to accuracy and validity. The aim of this paper is to provide insight into the methods used to estimate these costs and open the discussion about the interpretation of these results. A review of cost engineering practices shows there has been a long tradition of industrial cost estimation. Several techniques have been developed, but the expected accuracy range of the best techniques used in research varies between 10% and 30%. The quality of the estimates could be improved by establishing up-to-date correlations for the ORC industry in particular. Secondly, the rapidly growing ORC cost literature is briefly reviewed. A graph summarizing the estimated ORC investment costs displays a pattern of decreasing costs for increasing power output. Knowledge on the actual costs of real ORC modules and projects remains scarce. Finally, the investment costs of a known heat recovery ORC system are discussed and the methodologies and accuracies of several approaches are demonstrated using this case as benchmark. The best results are obtained with factorial estimation techniques such as the module costing technique, but the accuracies may diverge by up to +30%. Development of correlations and multiplication factors for ORC technology in particular is
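
    The classic cost engineering building blocks mentioned above (power-law capacity scaling, cost-index escalation, and factorial module costing) can be sketched as below. All numbers in the usage example are purely illustrative assumptions, not ORC correlations from the paper:

```python
def scale_by_capacity(base_cost, base_size, new_size, exponent=0.6):
    """Power-law capacity scaling ("six-tenths rule"):
    C2 = C1 * (S2 / S1) ** exponent."""
    return base_cost * (new_size / base_size) ** exponent

def update_to_year(cost, index_then, index_now):
    """Escalate a historical cost with a cost index such as the CEPCI."""
    return cost * index_now / index_then

def bare_module_cost(purchased_costs, module_factors):
    """Factorial (module costing) estimate: each purchased equipment
    cost times a bare-module factor covering installation, piping,
    instrumentation, and indirect costs."""
    return sum(c * f for c, f in zip(purchased_costs, module_factors))
```

    With hypothetical figures, a pump bought at 10 000 and an expander at 5 000 with bare-module factors 3.0 and 2.5 give a module cost of 42 500, a point estimate that, as the paper notes, still carries an accuracy band on the order of ±30%.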

  18. Switching EKF technique for rotor and stator resistance estimation in speed sensorless control of IMs

    International Nuclear Information System (INIS)

    Barut, Murat; Bogosyan, Seta; Gokasan, Metin

    2007-01-01

    High performance speed sensorless control of induction motors (IMs) calls for estimation and control schemes that offer solutions to parameter uncertainties as well as to difficulties involved with accurate flux/velocity estimation at very low and zero speed. In this study, a new EKF based estimation algorithm is proposed for the solution of both problems and is applied in combination with speed sensorless direct vector control (DVC). The technique is based on the consecutive execution of two EKF algorithms, by switching from one algorithm to another at every n sampling periods. The number of sampling periods, n, is determined based on the desired system performance. The switching EKF approach, thus applied, provides an accurate estimation of a larger number of parameters than would be possible with a single EKF algorithm. The simultaneous and accurate estimation of the rotor resistance Rr' and the stator resistance Rs, both in the transient and steady state, is an important challenge in speed sensorless IM control, and reported studies achieving satisfactory results are few, if any. With the proposed technique in this study, the sensorless estimation of Rr' and Rs is achieved in transient and steady state and in both high and low speed operation, while also estimating the unknown load torque, velocity, flux and current components. The performance demonstrated by the simulation results at zero speed, as well as at low and high speed operation, is very promising when compared with individual EKF algorithms performing either Rr' or Rs estimation, or with the few other approaches taken in past studies, which require either signal injection and/or a change of algorithms based on the speed range. The results also motivate utilization of the technique for multiple parameter estimation in a variety of control methods.

  19. Self-consistent technique for estimating the dynamic yield strength of a shock-loaded material

    International Nuclear Information System (INIS)

    Asay, J.R.; Lipkin, J.

    1978-01-01

    A technique is described for estimating the dynamic yield stress in a shocked material. This method employs reloading and unloading data from a shocked state along with a general assumption of yield and hardening behavior to estimate the yield stress in the precompressed state. No other data are necessary for this evaluation, and, therefore, the method has general applicability at high shock pressures and in materials undergoing phase transitions. In some special cases, it is also possible to estimate the complete state of stress in a shocked state. Using this method, the dynamic yield strength of aluminum at 2.06 GPa has been estimated to be 0.26 GPa. This value agrees reasonably well with previous estimates

  20. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with Nédélec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen

  1. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    are too high to count manually, and stereology is now the technique of choice whenever estimates of three-dimensional quantities need to be extracted from measurements on two-dimensional sections. All stereological methods are in principle unbiased; however, they rely on proper knowledge about...

  2. A passive technique using SSNTDs for Estimation of thorium to uranium ratios in rocks

    International Nuclear Information System (INIS)

    Kenawy, M.A.; Sayyah, T.A.; Said, A.F.; Hafez, A.F.

    2005-01-01

    A passive technique using plastic nuclear track detectors (CR-39 and LR-115) is presented to estimate Th/U ratios, and consequently the thorium and uranium content, in granites taken from uranium exploration mines in the Egyptian desert. The registration sensitivities of both the CR-39 and LR-115 detectors for close-contact alpha-radiography were computed, and the uranium and thorium concentrations in ppm were determined.

  3. Estimation of Anti-HIV Activity of HEPT Analogues Using MLR, ANN, and SVM Techniques

    Directory of Open Access Journals (Sweden)

    Basheerulla Shaik

    2013-01-01

    value than those of MLR and SVM techniques. Rm² metrics and ridge regression analysis indicated that the proposed four-variable model with MATS5e, RDF080u, T(O⋯O), and MATS5m as correlating descriptors is the best for estimating the anti-HIV activity (log 1/C) of the present set of compounds.

  4. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    Science.gov (United States)

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain imperative in the identification process in spite of the advent and accomplishment of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated the derivation as well as the revision of available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches: morphological, metric, molecular and radiographic methods in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones and hence, such direct methods of sex estimation are considered to be more reliable than the other methods. The Geometric morphometric (GM) and Diagnose Sexuelle Probabiliste (DSP) methods are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
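
    A minimal spatial-only sketch of the decision-based idea (detect extreme-valued pixels and replace only those with a local median) is shown below. The actual algorithm additionally switches filters by estimated noise variance and filters temporally across frames; this toy version only illustrates the detect-then-replace step:

```python
def median(vals):
    """Median of a small list (upper median for even lengths)."""
    s = sorted(vals)
    return s[len(s) // 2]

def decision_based_filter(img, low=0, high=255):
    """Detect salt-and-pepper pixels (at the extreme grey levels) and
    replace only those with the median of the uncorrupted pixels in
    their 3x3 neighbourhood; clean pixels pass through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] in (low, high):            # impulse detected
                window = [img[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))
                          if img[j][i] not in (low, high)]
                if window:                          # replace with median
                    out[y][x] = median(window)
    return out
```

    Replacing only detected pixels, rather than filtering everything, is what preserves edges and fine detail at high noise densities.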

  6. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)

  7. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady-flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...... with a blood mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data was recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when...
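
    The averaged periodogram (Welch) baseline used for comparison can be sketched in a few lines. This toy version uses a rectangular window and a direct DFT on an assumed synthetic tone, whereas a practical implementation would window each segment and use an FFT:

```python
import cmath
import math

def periodogram(seg):
    """Direct-DFT periodogram of one (rectangular-windowed) segment."""
    n = len(seg)
    return [abs(sum(seg[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def welch(x, seg_len, overlap=0.5):
    """Averaged periodogram: split the slow-time signal into
    overlapping segments and average their periodograms."""
    step = max(1, int(seg_len * (1 - overlap)))
    psds = [periodogram(x[i:i + seg_len])
            for i in range(0, len(x) - seg_len + 1, step)]
    return [sum(p[k] for p in psds) / len(psds) for k in range(seg_len)]

# Example: a tone at bin 8 of a 32-sample observation window.
x = [math.cos(2 * math.pi * 8 * t / 32) for t in range(256)]
psd = welch(x, seg_len=32)
peak_bin = max(range(1, 16), key=lambda k: psd[k])
```

    Averaging over segments trades frequency resolution for variance reduction, which is exactly the limitation the adaptive estimators in the paper aim to overcome for short observation windows.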

  8. Precision of four otolith techniques for estimating age of white perch from a thermally altered reservoir

    Science.gov (United States)

    Snow, Richard A.; Porta, Michael J.; Long, James M.

    2018-01-01

    The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.
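
    Ageing precision in such studies is commonly summarized with the mean coefficient of variation (CV) across repeated age reads; the minimal computation below is illustrative, not the authors' exact protocol:

```python
import math

def mean_cv(age_reads):
    """Mean coefficient of variation (percent) across fish, where each
    entry holds the repeated age estimates for one fish: CV is the
    sample SD of the reads divided by their mean, times 100."""
    cvs = []
    for reads in age_reads:
        m = sum(reads) / len(reads)
        if m == 0:
            continue  # CV is undefined when the mean estimated age is 0
        var = sum((r - m) ** 2 for r in reads) / (len(reads) - 1)
        cvs.append(100 * math.sqrt(var) / m)
    return sum(cvs) / len(cvs)
```

    A lower mean CV (as obtained for broken otoliths and July samples above) indicates better agreement between readers or preparation techniques.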

  9. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-09-01

    Full Text Available An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along the boresight is about one order of magnitude worse than that of the other two components.

  10. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    Science.gov (United States)

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to
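
    The reported error figures follow from the usual definitions of mean absolute error and root mean squared error on "observed age minus predicted age"; the small helper below is a generic sketch, not the authors' Bayesian model:

```python
import math

def error_metrics(observed, predicted):
    """MAE and RMSE of point age estimates, with the error defined as
    observed age minus predicted age, as in the study above."""
    errs = [o - p for o, p in zip(observed, predicted)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```

    RMSE exceeding MAE (2.38 vs 2.0 years for females here) is expected, since squaring weights the occasional large misclassification more heavily.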

  11. Application of the control variate technique to estimation of total sensitivity indices

    International Nuclear Information System (INIS)

    Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.

    2015-01-01

    Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method, also known as Sobol' sensitivity indices, has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices we apply the control variate reduction technique and develop a new formula for evaluation of total sensitivity indices. Presented results using well known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula
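
    The generic control variate estimator underlying the approach can be sketched as follows. This is a textbook version with the sample-optimal coefficient and an assumed toy integrand, not the paper's specific formula for total Sobol' indices:

```python
import math
import random

def cv_estimate(f, g, g_mean, sampler, n, seed=1):
    """Monte Carlo estimate of E[f] using a control variate g whose
    mean g_mean is known exactly:
        theta = mean(f) - beta * (mean(g) - g_mean),
    with beta the sample-optimal cov(f, g) / var(g). The closer f and
    g are correlated, the larger the variance reduction."""
    rng = random.Random(seed)
    xs = [sampler(rng) for _ in range(n)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    mf, mg = sum(fs) / n, sum(gs) / n
    cov = sum((a - mf) * (b - mg) for a, b in zip(fs, gs)) / (n - 1)
    var = sum((b - mg) ** 2 for b in gs) / (n - 1)
    return mf - (cov / var) * (mg - g_mean)

# Toy problem: E[exp(U)] with U ~ Uniform(0, 1) (true value e - 1),
# using the highly correlated control g(x) = x with known mean 1/2.
est = cv_estimate(math.exp, lambda x: x, 0.5, lambda r: r.random(), 2000)
```

    On this toy problem the correlation between exp(U) and U is about 0.99, so the control variate cuts the estimator variance by roughly two orders of magnitude compared with plain Monte Carlo at the same sample size.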

  12. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation for the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen special journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the defining of the research gap in this area, which has not been fully covered by previous proposals of construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.

  13. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

    International Nuclear Information System (INIS)

    Lee, Seung-Kuk

    2013-01-01

    Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., the forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the actual status, potentials and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimations on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite the limiting impact of temporal decorrelation in Pol-InSAR inversion, it remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (Tandem-L and BIOMASS satellite missions at L- and P-band). 
The impact of the system parameters (e.g., bandwidth

  14. IR-BASED SATELLITE PRODUCTS FOR THE MONITORING OF ATMOSPHERIC WATER VAPOR OVER THE BLACK SEA

    Directory of Open Access Journals (Sweden)

    VELEA LILIANA

    2016-03-01

    Full Text Available The amount of precipitable water (TPW) in the atmospheric column is one of the important pieces of information used in weather forecasting. Some of the studies involving the use of TPW relate to issues like lightning warning systems in airports, tornadic events, data assimilation in numerical weather prediction models for short-range forecasts, and TPW associated with intense rain episodes. Most of the available studies on TPW focus on properties and products at the global scale, with the drawback that regional characteristics (due to local processes acting as modulating factors) may be lost. For the Black Sea area, studies on the climatological features of atmospheric moisture are available from sparse or not readily available observational databases or from global reanalyses. These studies show that, although a basin of relatively small dimensions, the Black Sea presents features that may significantly impact the atmospheric circulation and its general characteristics. Satellite observations provide new opportunities for extending the knowledge of this area and for monitoring atmospheric properties at various scales. In particular, observations in the infrared (IR) spectrum are suitable for studies of small-scale basins, due to the finer spatial sampling and reliable information in coastal areas. As a first step toward the characterization of atmospheric moisture over the Black Sea from satellite-based information, we investigate three datasets of IR-based products which contain information on the total amount of moisture and on its vertical distribution, available in the area of interest. The aim is to provide a comparison of these data with regard to the main climatological features of moisture in this area and to highlight particular strengths and limits of each of them, which may be helpful in the choice of the most suitable dataset for a certain application.

  15. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters

    Science.gov (United States)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  16. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
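
    The finite-memory update and normalized detection statistic described above can be sketched in a simplified one-dimensional form (the parameter values and the reduction of the autocorrelation matrix to a scalar background power are illustrative assumptions, not the paper's full 2-D scheme):

```python
# Simplified scalar sketch of recursive point-target detection: an
# exponentially weighted estimate of the background power gives the algorithm
# finite memory (handling nonstationary noise), and the detection statistic is
# the sample energy normalized by that estimate -- a 1-D analogue of the
# colored-noise matched-filter statistic x^T R^{-1} x.

def detect_stream(samples, forget=0.95, threshold=4.0, init_var=1.0):
    """Flag samples whose normalized statistic exceeds the threshold."""
    var = init_var                    # running weighted background-power estimate
    flags = []
    for x in samples:
        stat = x * x / var            # normalized detection statistic
        flags.append(stat > threshold)
        # finite-memory (weighted) autocorrelation update
        var = forget * var + (1.0 - forget) * x * x
    return flags

background = [0.1, -0.2, 0.15, -0.05, 0.1, -0.1]
with_target = background + [3.0] + background   # a bright point target mid-stream
flags = detect_stream(with_target)
print(flags)
```

    The weighting factor `forget` plays the role described in the abstract: values closer to 1 give longer memory, values closer to 0 adapt faster to nonstationary backgrounds.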

  17. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    Science.gov (United States)

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
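
    The Kalman Filter side of this benchmark can be illustrated with a minimal scalar filter (a sketch with a made-up random-walk wind model, noise variances, and measurements, far simpler than the reduced-order turbine models compared in the paper):

```python
# Minimal scalar Kalman filter: random-walk state model x_k = x_{k-1} + w,
# measurement z_k = x_k + v, with process variance q and measurement variance r.

def kalman_1d(measurements, q=0.1, r=1.0, x0=0.0, p0=10.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state unchanged, uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement innovation
        p = (1.0 - k) * p          # posterior variance
        estimates.append(x)
    return estimates

noisy = [8.2, 7.9, 8.4, 8.1, 7.8, 8.3, 8.0]   # noisy wind-speed readings (m/s)
print(kalman_1d(noisy))
```

    The Extended and Unscented variants benchmarked above generalize this recursion to nonlinear state and measurement models; the Takagi-Sugeno observer instead blends a set of linear observers via fuzzy membership functions.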

  18. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  19. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  20. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    Energy Technology Data Exchange (ETDEWEB)

    Zarzalejo, L.F.; Ramirez, L.; Polo, J. [DER-CIEMAT, Madrid (Spain). Renewable Energy Dept.

    2005-07-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. The artificial intelligence models generally showed better behaviour. (author)

  1. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  2. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    International Nuclear Information System (INIS)

    Zarzalejo, Luis F.; Ramirez, Lourdes; Polo, Jesus

    2005-01-01

    Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. The artificial intelligence models generally showed better behaviour

  3. PERFORMANCE ANALYSIS OF PILOT BASED CHANNEL ESTIMATION TECHNIQUES IN MB OFDM SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Madheswaran

    2011-12-01

    Full Text Available Ultra wideband (UWB) communication is mainly used for short-range communication in wireless personal area networks. Orthogonal Frequency Division Multiplexing (OFDM) is being used as a key physical layer technology for Fourth Generation (4G) wireless communication. OFDM-based communication gives high spectral efficiency and mitigates Inter-Symbol Interference (ISI) in a wireless medium. In this paper the IEEE 802.15.3a based Multiband OFDM (MB OFDM) system is considered. Pilot-based channel estimation techniques are considered to analyze the performance of MB OFDM systems over Linear Time Invariant (LTI) channel models. In this paper, pilot-based Least Squares (LS) and Linear Minimum Mean Square Error (LMMSE) channel estimation techniques have been considered for the UWB OFDM system. In the proposed method, the estimated Channel Impulse Responses (CIRs) are filtered in the time domain to account for the channel delay spread. The performance of the proposed system has also been analyzed for different modulation techniques and various pilot density patterns.

  4. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.

  5. Three different applications of genetic algorithm (GA) search techniques on oil demand estimation

    International Nuclear Information System (INIS)

    Canyurt, Olcay Ersel; Oztuerk, Harun Kemal

    2006-01-01

    The present study develops three scenarios to analyze oil consumption and make future projections based on the genetic algorithm (GA) approach, and examines the effect of the design parameters on the oil utilization values. The models, developed in non-linear form, are applied to the oil demand of Turkey. The GA Oil Demand Estimation Model (GAODEM) is developed to estimate future oil demand values based on Gross National Product (GNP), population, import, export, oil production, oil import and car, truck and bus sales figures. Among these models, the GA-PGOiTI model, which uses population, GNP, oil import, truck sales and import as design parameters/indicators, was found to provide the best fit to the observed data. It may be concluded that the proposed models can be used as alternative estimation techniques for the future oil utilization values of any country

  6. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented. This is primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent this high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques solves the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements, bound to the specifics of high-voltage electric transmission systems, are also suggested

  7. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews the research advances of VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on the two main definitions of VWC - the fuel moisture content (FMC) and the equivalent water thickness (EWT) - the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, the measured information and the dataset are used to estimate VWC; the results show there are significant correlations among three kinds of vegetation water indices (i.e., WSI, NDII, NDWI1640, WI/NDVI) and the canopy FMC of winter wheat (n=45). Finally, future development directions of VWC detection based on optical remote sensing techniques are also summarized.
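
    A spectral water index of the kind reviewed above is a simple band-ratio computation. The sketch below uses the normalized-difference form with NIR (~860 nm) and SWIR (~1640 nm) reflectances, matching the NDWI1640 variant named in the abstract; the band values are made-up examples:

```python
# Normalized difference water index: water absorbs strongly in the SWIR band,
# so wetter canopies give a larger NIR-SWIR contrast and a higher index value.

def ndwi(nir, swir):
    return (nir - swir) / (nir + swir)

wet_canopy = ndwi(nir=0.45, swir=0.20)
dry_canopy = ndwi(nir=0.45, swir=0.38)
print(round(wet_canopy, 3), round(dry_canopy, 3))
```

    Index values like these are then regressed against field-measured FMC or EWT to build the retrieval relations discussed in the review.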

  8. Innovative Techniques for Estimating Illegal Activities in a Human-Wildlife-Management Conflict

    Science.gov (United States)

    Cross, Paul; St. John, Freya A. V.; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management. PMID:23341973
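
    The RRT estimate reported above comes from inverting the randomization applied to respondents' answers. The sketch below assumes a generic forced-response design (the paper's exact RRT design may differ): each respondent answers truthfully with probability p_truth and is forced to answer "yes" otherwise, so P(yes) = p_truth * pi + (1 - p_truth), and the sensitive-behavior prevalence pi can be recovered from the observed "yes" proportion:

```python
# Unbiased prevalence estimator for a forced-response RRT design (assumed
# here for illustration). lam is the observed proportion of "yes" answers.

def rrt_estimate(yes_count, n, p_truth):
    lam = yes_count / n
    pi_hat = (lam - (1.0 - p_truth)) / p_truth
    return max(0.0, min(1.0, pi_hat))         # clip to the valid range [0, 1]

# e.g. 300 farmers, 96 "yes" answers, truthful-answer probability 0.8
# (all numbers invented for illustration):
print(round(rrt_estimate(96, 300, 0.8), 3))
```

    The randomization protects individual respondents (a "yes" is not self-incriminating), which is what makes the technique suitable for illegal-behavior questions like badger killing.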

  9. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Directory of Open Access Journals (Sweden)

    Paul Cross

    Full Text Available Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.

  10. [Research Progress of Vitreous Humor Detection Technique on Estimation of Postmortem Interval].

    Science.gov (United States)

    Duan, W C; Lan, L M; Guo, Y D; Zha, L; Yan, J; Ding, Y J; Cai, J F

    2018-02-01

    Estimation of the postmortem interval (PMI) plays a crucial role in forensic study and identification work. Because of its unique anatomical location, vitreous humor is considered a suitable material for estimating PMI, which has aroused interest among scholars, and a number of studies have been carried out. The detection techniques for vitreous humor are constantly being developed and improved, and have gradually been applied in forensic science; meanwhile, the study of PMI estimation using vitreous humor is being updated rapidly. This paper reviews various techniques and instruments applied to vitreous humor detection, such as ion selective electrodes, capillary ion analysis, spectroscopy, chromatography, nano-sensing technology, automatic biochemical analysers, flow cytometers, etc., as well as the related research progress on PMI estimation in recent years. In order to provide a research direction for scholars and promote a more accurate and efficient application of vitreous humor analysis in PMI estimation, some remaining problems are also analysed in this paper. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  11. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Science.gov (United States)

    Cross, Paul; St John, Freya A V; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.

  12. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven highly successful in applications where there are a large number of redundant transmitters, but for systems with little or no redundancy, averaging methods are not always reliable. That is, for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace the averaging techniques. This paper explores three well-known process estimation techniques, namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper presents an evaluation of the effectiveness of these methods in detecting transmitter drift under actual plant conditions. (authors)
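
    Of the three techniques, AAKR is the simplest to sketch: a query vector of sensor readings is reconstructed as a kernel-weighted average of fault-free "memory" vectors, and a drifting sensor shows up as a residual between the query and its reconstruction. The memory data and bandwidth below are illustrative, not plant data:

```python
import math

def aakr(memory, query, h=1.0):
    """Return the Gaussian-kernel-weighted reconstruction of `query`."""
    weights = []
    for m in memory:
        d2 = sum((a - b) ** 2 for a, b in zip(m, query))
        weights.append(math.exp(-d2 / (2.0 * h * h)))
    total = sum(weights)
    return [sum(w * m[i] for w, m in zip(weights, memory)) / total
            for i in range(len(query))]

# memory of fault-free states for three related transmitters (invented values)
memory = [[10.0, 10.1, 9.9], [10.2, 10.3, 10.1], [9.8, 9.9, 9.7]]
query = [10.0, 10.1, 10.8]          # third channel reads high (possible drift)
recon = aakr(memory, query)
residual = [q - r for q, r in zip(query, recon)]
print([round(r, 2) for r in residual])   # large residual flags the third channel
```

    Because the reconstruction is pulled toward the fault-free memory, a single drifting channel stands out even without redundant copies of that instrument, which is the advantage over plain averaging noted above.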

  13. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because their results can be generalized to the target population.
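
    The sample-size calculation for a proportion implied by these factors is n = z² p(1 − p) / d², where z is the standard-normal value for the chosen confidence level, p the expected proportion, and d the margin of error. The numbers below are the common worst-case defaults, not values from the article:

```python
import math

def sample_size_proportion(p=0.5, d=0.05, z=1.96):
    """Minimum sample size for estimating a proportion p within margin d."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

print(sample_size_proportion())            # 95% confidence, p=0.5, +/-5% -> 385
print(sample_size_proportion(d=0.03))      # tighter precision needs more subjects
```

    Note how halving-ish the margin (5% to 3%) nearly triples the required sample size, which is the precision trade-off stated in the abstract.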

  14. Development of technique for estimating primary cooling system break diameter in predicting nuclear emergency event sequence

    International Nuclear Information System (INIS)

    Tatebe, Yasumasa; Yoshida, Yoshitaka

    2012-01-01

    If an emergency event occurs in a nuclear power plant, appropriate action is selected and taken in accordance with the plant status, which changes from time to time, in order to prevent escalation and mitigate the event consequences. It is thus important to predict the event sequence and identify the plant behavior resulting from the action taken. In predicting the event sequence during a loss-of-coolant accident (LOCA), it is necessary to estimate break diameter. The conventional method for this estimation is time-consuming, since it involves multiple sensitivity analyses to determine the break diameter that is consistent with the plant behavior. To speed up the process of predicting the nuclear emergency event sequence, a new break diameter estimation technique that is applicable to pressurized water reactors was developed in this study. This technique enables the estimation of break diameter using the plant data sent from the safety parameter display system (SPDS), with focus on the depressurization rate in the reactor cooling system (RCS) during LOCA. The results of LOCA analysis, performed by varying the break diameter using the MAAP4 and RELAP5/MOD3.2 codes, confirmed that the RCS depressurization rate could be expressed by the log linear function of break diameter, except in the case of a small leak, in which RCS depressurization is affected by the coolant charging system and the high-pressure injection system. A correlation equation for break diameter estimation was developed from this function and tested for accuracy. Testing verified that the correlation equation could estimate break diameter accurately within an error of approximately 16%, even if the leak increases gradually, changing the plant status. (author)
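
    The core of the technique is that the RCS depressurization rate is a log-linear function of break diameter, so the correlation can be inverted to recover the diameter from SPDS data. The coefficients below are made-up placeholders; in practice they would be fitted to MAAP4/RELAP5 sensitivity runs as described above:

```python
import math

# Hypothetical fitted coefficients of the log-linear correlation
# rate = A * ln(d) + B (rate in MPa/min, d in cm); placeholders only.
A, B = 0.85, 2.0

def depressurization_rate(d_cm):
    return A * math.log(d_cm) + B

def estimate_break_diameter(rate):
    """Invert the log-linear correlation to recover the break diameter."""
    return math.exp((rate - B) / A)

d_true = 5.0
rate = depressurization_rate(d_true)
print(round(estimate_break_diameter(rate), 6))   # recovers 5.0
```

    With real fitted coefficients, the same inversion applied to the observed depressurization rate gives the break-diameter estimate (accurate to roughly 16% per the study), without re-running the sensitivity analyses.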

  15. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. 
For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRT can be used for increasing the relative
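
    The second VRT above - following only a fraction of the generated optical photons while rescaling the weight of each survivor - can be sketched as follows. Deterministic thinning is used here for clarity; the actual implementation in fastdetect2 is not reproduced:

```python
# Follow only a fraction 1/k of the optical photons and multiply each
# survivor's statistical weight by k, so the detected signal mean (the total
# transported weight) is preserved while k times fewer photons are tracked.

def thin_photons(photon_weights, k):
    """Keep every k-th photon with its weight scaled by k."""
    return [w * k for i, w in enumerate(photon_weights) if i % k == 0]

photons = [1.0] * 12                      # 12 unit-weight optical photons
survivors = thin_photons(photons, k=3)
print(len(survivors), sum(survivors))     # 4 photons tracked, total weight 12.0
```

    The mean is preserved exactly, but the per-history variance grows, which is why the paper tracks the variance of the Swank-factor estimate to decide how much thinning is acceptable.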

  16. SIMULTANEOUS ESTIMATION OF PHOTOMETRIC REDSHIFTS AND SED PARAMETERS: IMPROVED TECHNIQUES AND A REALISTIC ERROR BUDGET

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-01-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts

  17. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
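
    A toy version of the least squares inversion described above: smoothed observations are modeled as a known leakage-mixing of basin amplitudes, and the amplitudes are recovered via the normal equations. The 2-basin mixing matrix and trend values are invented for illustration:

```python
def solve_normal_2x2(A, y):
    """Least-squares solve of A x = y for 2 unknowns via the normal equations."""
    ata = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(2)]
           for i in range(2)]
    aty = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (aty[0] * ata[1][1] - aty[1] * ata[0][1]) / det
    x1 = (aty[1] * ata[0][0] - aty[0] * ata[1][0]) / det
    return [x0, x1]

# rows: three observation points; columns: sensitivity of each point to the
# two basins after smoothing (leakage spreads each basin into its neighbors)
A = [[0.8, 0.2],
     [0.3, 0.7],
     [0.1, 0.4]]
true_mass = [-20.0, 5.0]                      # cm/yr water-equivalent trends
obs = [sum(a * m for a, m in zip(row, true_mass)) for row in A]
print([round(v, 6) for v in solve_normal_2x2(A, obs)])
```

    In the real problem the mixing matrix comes from applying the GRACE smoothing to each candidate basin, and a poor choice of basin layout makes this matrix ill-conditioned, which is the source of the systematic errors the paper quantifies.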

  18. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique

    International Nuclear Information System (INIS)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Levin, N W; Leonard, E F

    2008-01-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis so that at DW little further change is observed and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DWClin), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DWcBIS). Post-dialysis weight was reduced from 78.3 ± 28 to 77.1 ± 27 kg, with a corresponding change in normalized resistivity (in units of 10⁻² Ω m³ kg⁻¹); the weight difference between DWClin and DWcBIS was 0.3 ± 0.2%. The results indicate that cBIS utilizing a dynamic technique continuously during dialysis is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques have been developed to detect DW as precisely, it is suggested as a standard to be evaluated clinically

  19. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    International Nuclear Information System (INIS)

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-01-01

Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Methods: Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on the data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycles. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using an orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion
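The motion-modeling step (a PCA model of the planning-CT deformation fields, with weights fitted against limited onboard data) can be sketched as follows. The array sizes are arbitrary, and fitting against a subset of DFM entries is only a stand-in for the paper's DRR-to-projection data-fidelity constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# training deformation field maps (DFMs) from planning 4D-CT:
# one flattened DFM per respiratory phase
train_dfms = rng.normal(size=(8, 50))
mean_dfm = train_dfms.mean(axis=0)

# principal components of the respiratory motion (top 3)
_, _, Vt = np.linalg.svd(train_dfms - mean_dfm, full_matrices=False)
pcs = Vt[:3]                                   # (3, 50)

# "onboard" DFM assumed to lie in the span of the motion model
w_true = np.array([1.5, -0.7, 0.3])
onboard_dfm = mean_dfm + w_true @ pcs

# limited data: only 20 of the 50 entries are observed (a proxy for
# the limited-angle projection data-fidelity constraint)
idx = rng.choice(50, size=20, replace=False)
A = pcs[:, idx].T                              # (20, 3)
w_est, *_ = np.linalg.lstsq(A, onboard_dfm[idx] - mean_dfm[idx], rcond=None)
dfm_est = mean_dfm + w_est @ pcs
```

The FD stage described above would then fine-tune `dfm_est` voxel-wise under the data-fidelity and deformation-energy terms.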

  20. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, Marina; Abidin, Diana Atiqah Zainal [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia); Das, Simon K. [Marine Ecosystem Research Centre, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi (Malaysia); Ghaffar, Mazlan Abd. [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia and Marine Ecosystem Research Centre, Faculty of Science and Technology, Universiti Kebangsaan M (Malaysia)

    2014-09-03

The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were measured using freshly prepared stomachs ligatured at the tip of a burette, recording the maximum amount of distilled water (ml) held by the stomach. Stomach volume correlates with maximum food intake (S{sub max}), and maximum stomach distension can be estimated by the allometric model volume=0.0000089W{sup 2.93}. Gastric emptying time was estimated with a qualitative X-radiography technique in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation using radio-opaque barium sulphate (BaSO{sub 4}) paste injected into wet shrimp in proportion to body weight. The BaSO{sub 4} was found suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of scat fish could be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed to a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30 - 36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.
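The allometric model quoted above can be applied directly; we assume W is body weight in grams and the result is in millilitres, as we read the abstract:

```python
def stomach_volume_ml(weight_g: float) -> float:
    """Allometric stomach-volume model from the abstract:
    volume = 0.0000089 * W**2.93 (units assumed: g in, ml out)."""
    return 0.0000089 * weight_g ** 2.93
```

For a 200 g fish this gives roughly 49 ml of maximum distension; because the exponent is 2.93, volume scales faster than body weight.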

  1. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique.

    Science.gov (United States)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Leonard, E F; Levin, N W

    2008-06-01

Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis so that at DW little further change is observed and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DW(Clin)), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DW(cBIS)). Post-dialysis weight was reduced from 78.3 +/- 28 to 77.1 +/- 27 kg, with a corresponding increase in normalized resistivity. The results indicate that cBIS utilizing a dynamic technique continuously during dialysis is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques have been developed to detect DW as precisely, it is suggested as a standard to be evaluated clinically.

  2. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    Science.gov (United States)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were measured using freshly prepared stomachs ligatured at the tip of a burette, recording the maximum amount of distilled water (ml) held by the stomach. Stomach volume correlates with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume=0.0000089W^2.93. Gastric emptying time was estimated with a qualitative X-radiography technique in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation using radio-opaque barium sulphate (BaSO4) paste injected into wet shrimp in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of scat fish could be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed to a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30 - 36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.

  3. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are complemented by a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
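A common textbook formulation of the IST uses two subsamples: a long-list group whose reported total includes the sensitive item and a short-list group whose total does not, so the sensitive mean is estimated by a difference of means, and a Neyman-type split of the total sample minimizes its variance. This is a hedged sketch of that basic form (with invented distributions), not the paper's generic-design derivation:

```python
import numpy as np

def ist_estimate(long_sums, short_sums):
    """IST difference estimator of the sensitive-item population mean."""
    return np.mean(long_sums) - np.mean(short_sums)

def optimal_split(s_long, s_short, n_total):
    """Split n_total to minimize s_long^2/n_long + s_short^2/n_short."""
    n_long = round(n_total * s_long / (s_long + s_short))
    return n_long, n_total - n_long

rng = np.random.default_rng(4)
def nonsensitive(n): return rng.normal(10.0, 2.0, n)   # innocuous items total
def sensitive(n): return rng.normal(3.0, 1.0, n)        # sensitive item

n_long, n_short = optimal_split(s_long=2.2, s_short=2.0, n_total=20_000)
long_sums = nonsensitive(n_long) + sensitive(n_long)    # long-list reports
short_sums = nonsensitive(n_short)                      # short-list reports
est = ist_estimate(long_sums, short_sums)               # ≈ 3.0
```

Individual reports reveal only a sum, which is what preserves respondent anonymity.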

  4. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    Science.gov (United States)

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

High flood occurrences with large environmental damages have a growing trend in Iran. The dynamic movement of water during a flood causes different environmental damages in geographical areas with different characteristics, such as topographic conditions. In general, the environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current study aims to detect the environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The flood zone map was produced in four steps: steps 1 to 3 calculate and estimate the flood zone map for the study area, while step 4 estimates the environmental effects of the flood occurrence. Based on our studies, flood zone mapping techniques provide a wide range of accuracy for estimating the environmental effects of a flood occurrence. Moreover, it was found that the Jiroft dam in the study area can reduce the flood zone from 260 to 225 hectares and decrease flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be saved environmentally.

  5. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    International Nuclear Information System (INIS)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-01-01

    The present study was conducted to investigate the food consumption pattern and gastric emptying time using x-radiography technique in scats fish, Scatophagus argus feeding to satiation in laboratory conditions. Prior to feeding experiment, fish of various sizes were examined their stomach volume, using freshly prepared stomachs ligatured at the tips of the burret, where the maximum amount of distilled water collected in the stomach were measured (ml). Stomach volume is correlated with maximum food intake (S max ) and it can estimate the maximum stomach distension by allometric model i.e volume=0.0000089W 2.93 . Gastric emptying time was estimated using a qualitative X-radiography technique, where the fish of various sizes were fed to satiation at different time since feeding. All the experimental fish was feed into satiation using radio-opaque barium sulphate (BaSO 4 ) paste injected in the wet shrimp in proportion to the body weight. The BaSO 4 was found suitable to track the movement of feed/prey in the stomach over time and gastric emptying time of scats fish can be estimated. The results of qualitative X-Radiography observation of gastric motility, showed the fish (200 gm) that fed to maximum satiation meal (circa 11 gm) completely emptied their stomach within 30 - 36 hrs. The results of the present study will provide the first baseline information on the stomach volume, gastric emptying of scats fish in captivity

  6. Performance Comparison of Adaptive Estimation Techniques for Power System Small-Signal Stability Assessment

    Directory of Open Access Journals (Sweden)

    E. A. Feilat

    2010-12-01

Full Text Available This paper demonstrates the assessment of the small-signal stability of a single-machine infinite-bus power system under widely varying loading conditions using the concept of synchronizing and damping torque coefficients. The coefficients are calculated from the time responses of the rotor angle, speed, and torque of the synchronous generator. Three adaptive computation algorithms, including Kalman filtering, Adaline, and recursive least squares, have been compared for estimating the synchronizing and damping torque coefficients. The steady-state performance of the three adaptive techniques is compared with the conventional static least squares technique through computer simulations at different loading conditions. The algorithms are compared to each other in terms of speed of convergence and accuracy. Recursive least squares estimation offers several advantages, including a significant reduction in computing time and computational complexity. The tendency of an unsupplemented static exciter to degrade the system damping for medium and heavy loading is verified. Consequently, a power system stabilizer whose parameters are adjusted to compensate for variations in the system loading is designed using the phase compensation method. The effectiveness of the stabilizer in enhancing the dynamic stability over a wide range of operating conditions is verified through the calculation of the synchronizing and damping torque coefficients using the recursive least squares technique.
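One of the compared estimators, recursive least squares, can be sketched for the small-signal torque model ΔTe ≈ Ks·Δδ + Kd·Δω. The signals and the coefficient values below are synthetic assumptions, not data from the paper:

```python
import numpy as np

def rls_fit(X, y, lam=1.0, delta=1000.0):
    """Recursive least squares for y(k) ≈ theta · x(k); returns theta."""
    p = X.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)                        # large initial covariance
    for x, yk in zip(X, y):
        g = P @ x / (lam + x @ P @ x)            # gain vector
        theta = theta + g * (yk - x @ theta)     # coefficient update
        P = (P - np.outer(g, x @ P)) / lam       # covariance update
    return theta

# synthetic small-signal responses: dTe = Ks*d_delta + Kd*d_omega
rng = np.random.default_rng(1)
d_delta = rng.normal(size=500)                   # rotor-angle deviation
d_omega = rng.normal(size=500)                   # speed deviation
d_te = 1.2 * d_delta + 4.5 * d_omega             # assumed Ks=1.2, Kd=4.5
Ks, Kd = rls_fit(np.column_stack([d_delta, d_omega]), d_te)
```

Setting `lam` below 1 gives the forgetting-factor variant used to track coefficients that drift with loading.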

  7. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    Science.gov (United States)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high-resolution measurements with inter-spacecraft separations of about 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter corresponds to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions measured at sub-proton-scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the evaluation of the current density.
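The curlometer itself is compact: under the linear-field assumption, the field differences between one reference spacecraft and the other three determine the gradient tensor, whose antisymmetric part gives curl B and hence J = curl B / μ0. A sketch with made-up positions and a linear test field:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def curlometer(r, B):
    """Estimate J = curl(B)/mu0 from four spacecraft.
    r, B: (4, 3) arrays of positions [m] and magnetic fields [T].
    Valid only when B varies linearly across the tetrahedron."""
    D = r[1:] - r[0]                 # (3, 3) separations from reference s/c
    M = B[1:] - B[0]                 # (3, 3) field differences
    grad = np.linalg.solve(D, M)     # grad[k, j] = dB_j / dx_k
    curl = np.array([grad[1, 2] - grad[2, 1],
                     grad[2, 0] - grad[0, 2],
                     grad[0, 1] - grad[1, 0]])
    return curl / MU0                # current density [A/m^2]

# tetrahedron with 100 km legs and a linear field B_y = c * x
r = 1e5 * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
c = 1e-9                             # field gradient [T/m]
B = np.column_stack([np.zeros(4), c * r[:, 0], np.zeros(4)])
J = curlometer(r, B)                 # expect (0, 0, c/mu0)
```

On a real turbulent field the linearity assumption degrades as the separations grow, which is exactly the regime the paper probes.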

  8. Signal Processing of Ground Penetrating Radar Using Spectral Estimation Techniques to Estimate the Position of Buried Targets

    Directory of Open Access Journals (Sweden)

    Shanker Man Shrestha

    2003-11-01

Super-resolution is very important for the signal processing of GPR (ground penetrating radar) to resolve closely buried targets. However, it is not easy to obtain high resolution, as GPR signals are very weak and enveloped by noise. The MUSIC (multiple signal classification) algorithm, which is well known for its super-resolution capability, has been implemented for the signal and image processing of GPR. In addition, a conventional spectral estimation technique, the FFT (fast Fourier transform), has also been implemented for high-precision received signal levels. In this paper, we propose the CPM (combined processing method), which combines the time-domain response of the MUSIC algorithm with the conventional IFFT (inverse fast Fourier transform) to obtain super-resolution and a high-precision signal level. In order to support the proposal, detailed simulations were performed analyzing the SNR (signal-to-noise ratio). Moreover, a field experiment at a research field and a laboratory experiment at the University of Electro-Communications, Tokyo, were also performed for thorough investigation, and they support the proposed method. All the simulation and experimental results are presented.
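A minimal MUSIC demonstration on synthetic data: two closely spaced complex exponentials (standing in for overlapping GPR returns) are resolved from the noise-subspace pseudospectrum. The array size, frequencies, and noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
M, snaps = 16, 200
f_true = [0.20, 0.23]                 # two closely spaced "targets"

n = np.arange(M)
A = np.exp(2j * np.pi * np.outer(n, f_true))          # steering matrix
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
noise = 0.01 * (rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))
X = A @ S + noise                                     # sensor snapshots

R = X @ X.conj().T / snaps                            # sample covariance
_, vecs = np.linalg.eigh(R)                           # ascending eigenvalues
En = vecs[:, :M - 2]                                  # noise subspace

freqs = np.linspace(0.0, 0.5, 2001)
steer = np.exp(2j * np.pi * np.outer(n, freqs))
pmusic = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2

# the two strongest local maxima of the pseudospectrum
peaks = [i for i in range(1, len(freqs) - 1)
         if pmusic[i - 1] < pmusic[i] > pmusic[i + 1]]
peaks.sort(key=lambda i: pmusic[i], reverse=True)
f_est = sorted(freqs[i] for i in peaks[:2])
```

The two frequencies sit well inside one FFT resolution cell (1/M ≈ 0.06), which is the super-resolution behavior the paper exploits; the CPM then adds an IFFT time-domain response to recover the signal level as well.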

  9. Estimation of endogenous faecal calcium in buffalo (Bos bubalis) by isotope dilution technique

    International Nuclear Information System (INIS)

    Singh, S.; Sareen, V.K.; Marwaha, S.R.; Sekhon, B.; Bhatia, I.S.

    1973-01-01

Detailed investigations on the isotope-dilution technique for the estimation of endogenous faecal calcium were conducted with buffalo calves fed a growing ration. The ration consisted of wheat straw, green lucerne and concentrate mix. The endogenous faecal calcium was 3.71 g/day, which is 17.8 percent of the total faecal calcium. The apparent and true digestibilities of Ca were calculated as 51 and 60 percent, respectively. Endogenous faecal calcium can be estimated in buffalo calves by giving a single subcutaneous injection of Ca-45 and collecting blood samples on the 12th and 21st days only, together with a representative sample from the faeces collected from the 13th through the 22nd day after the injection. (author)
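The arithmetic behind the apparent/true digestibility figures can be reproduced from the numbers quoted above; the daily Ca intake is back-calculated from the 51% apparent digestibility and is our assumption, not a value stated in the abstract:

```python
def digestibilities(intake_g, faecal_total_g, faecal_endogenous_g):
    """Apparent vs true Ca digestibility; the isotope-dilution estimate of
    endogenous faecal Ca corrects the apparent value upward."""
    apparent = (intake_g - faecal_total_g) / intake_g
    true = (intake_g - (faecal_total_g - faecal_endogenous_g)) / intake_g
    return apparent, true

# abstract figures: endogenous Ca 3.71 g/day = 17.8% of total faecal Ca,
# so total faecal Ca ≈ 20.8 g/day; intake (≈42.5 g/day) is then
# back-calculated from the 51% apparent digestibility (our assumption)
faecal = 3.71 / 0.178
intake = faecal / (1 - 0.51)
app, tru = digestibilities(intake, faecal, 3.71)   # ≈ 0.51 and ≈ 0.60
```

The recovered true digestibility of roughly 60% matches the abstract, which is a useful consistency check on the quoted numbers.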

  10. A new technique for fire risk estimation in the wildland urban interface

    Science.gov (United States)

    Dasgupta, S.; Qu, J. J.; Hao, X.

A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the method meaning, a site- and season-independent applicability, and possibilities for computing spread rates and ignition probabilities, features contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high-spatial-resolution ASTER.

  11. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

  12. Application of genetic algorithm (GA) technique on demand estimation of fossil fuels in Turkey

    International Nuclear Information System (INIS)

    Canyurt, Olcay Ersel; Ozturk, Harun Kemal

    2008-01-01

The main objective is to investigate Turkey's fossil fuel demand, projections and supplies using the structure of Turkish industry and economic conditions. This study develops scenarios to analyze fossil fuel consumption and makes future projections based on a genetic algorithm (GA). The models, developed in nonlinear form, are applied to the coal, oil and natural gas demand of Turkey. Genetic algorithm demand estimation models (GA-DEM) are developed to estimate future coal, oil and natural gas demand values based on population, gross national product, and import and export figures. It may be concluded that the proposed models can be used as alternative solutions and estimation techniques for the future fossil fuel utilization values of any country. In the study, the coal, oil and natural gas consumption of Turkey is projected. Turkish fossil fuel demand has increased dramatically: coal, oil and natural gas consumption values are estimated to increase by factors of almost 2.82, 1.73 and 4.83, respectively, between 2000 and 2020. In the figures, GA-DEM results are compared with World Energy Council Turkish National Committee (WECTNC) projections. The observed results indicate that WECTNC overestimates fossil fuel consumption. (author)

  13. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As cost estimates are required at the early stages of a project, consideration was given to the fact that the input data for the regression models could be easily extracted from sketches or the scope definition of the project. Eleven regression models were developed to estimate the total cost of a road construction project in US dollars; five of them include bid quantities as input variables and six include road length and road width. The coefficient of determination r² for the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the models fit the real-life data well. The mean absolute percentage error (MAPE) of the developed regression models ranges from 13% to 31%; these results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
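The model form and the two reported accuracy metrics (r² and MAPE) can be sketched on synthetic data; this is not the paper's 131-project West Bank dataset, and the coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic early-stage data, 131 projects like the paper's sample size:
# cost = a + b*length + c*width + noise (all coefficients invented)
length_km = rng.uniform(1.0, 20.0, 131)
width_m = rng.uniform(6.0, 12.0, 131)
cost_usd = (50_000 + 120_000 * length_km + 8_000 * width_m
            + rng.normal(0.0, 30_000.0, 131))

# ordinary least squares fit of the early-stage cost model
X = np.column_stack([np.ones(131), length_km, width_m])
coef, *_ = np.linalg.lstsq(X, cost_usd, rcond=None)
pred = X @ coef

# the paper's two accuracy metrics
r2 = 1.0 - np.sum((cost_usd - pred) ** 2) / np.sum((cost_usd - cost_usd.mean()) ** 2)
mape = np.mean(np.abs((cost_usd - pred) / cost_usd)) * 100.0
```

Only road length and width are used here, mirroring the six sketch-level models; the five bid-quantity models would simply add columns to `X`.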

  14. An experimental result of estimating an application volume by machine learning techniques.

    Science.gov (United States)

    Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko

    2015-01-01

In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and the most flexible development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume is easily forgotten, because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application, including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, and uses machine learning techniques (via Weka) to estimate whether the volume should be adjusted.

  15. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
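A minimal sketch of the spring-relaxation update for one unknown node ranging off three anchors: each measured distance acts as a spring at its rest length, and the node moves along the net spring force until equilibrium. The geometry, step size, and iteration count are assumptions:

```python
import numpy as np

def spring_relax(anchors, dists, x0, step=0.1, iters=2000):
    """Spring-relaxation position estimate: each measured range is a
    spring whose rest length is the measured distance."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        force = np.zeros(2)
        for a, d in zip(anchors, dists):
            v = x - a
            cur = np.linalg.norm(v)
            if cur > 1e-12:
                # spring force along the anchor direction,
                # proportional to (rest length - current length)
                force += (d - cur) * v / cur
        x += step * force
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 7.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free ranges
est = spring_relax(anchors, dists, x0=[5.0, 5.0])
```

With RSSI-derived (noisy) ranges the same update simply settles at the least-squares compromise between the springs, which is what makes it suitable for cheap distributed deployment.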

  16. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.

  17. Comparison of Available Bandwidth Estimation Techniques in Packet-Switched Mobile Networks

    DEFF Research Database (Denmark)

    López Villa, Dimas; Ubeda Castellanos, Carlos; Teyeb, Oumer Mohammed

    2006-01-01

The relative contribution of the transport network towards the per-user capacity in mobile telecommunication systems is becoming very important due to the ever increasing air-interface data rates. Thus, resource management procedures such as admission, load and handover control can make use of information regarding the available bandwidth in the transport network, as it could end up being the bottleneck rather than the air interface. This paper provides a comparative study of three well known available bandwidth estimation techniques, i.e. TOPP, SLoPS and pathChirp, taking into account

  18. A Cost Estimation Analysis of U.S. Navy Ship Fuel-Savings Techniques and Technologies

    Science.gov (United States)

    2009-09-01

Master's Thesis: A Cost Estimation Analysis of U.S. Navy Ship Fuel-Savings Techniques and Technologies. Cites: Horngren, C. T., Datar, S. M., & Foster, G. (2006). Cost Accounting: A Managerial Emphasis (12th ed.). Saddle River, NJ: Pearson.

  19. Incorrectly Interpreting the Carbon Mass Balance Technique Leads to Biased Emissions Estimates from Global Vegetation Fires

    Science.gov (United States)

    Surawski, N. C.; Sullivan, A. L.; Roxburgh, S. H.; Meyer, M.; Polglase, P. J.

    2016-12-01

    Vegetation fires are a complex phenomenon and have a range of global impacts including influences on climate. Even though fire is a necessary disturbance for the maintenance of some ecosystems, a range of anthropogenically deleterious consequences are associated with it, such as damage to assets and infrastructure, loss of life, as well as degradation to air quality leading to negative impacts on human health. Estimating carbon emissions from fire relies on a carbon mass balance technique which has evolved with two different interpretations in the fire emissions community. Databases reporting global fire emissions estimates use an approach based on `consumed biomass' which is an approximation to the biogeochemically correct `burnt carbon' approach. Disagreement between the two methods occurs because the `consumed biomass' accounting technique assumes that all burnt carbon is volatilized and emitted. By undertaking a global review of the fraction of burnt carbon emitted to the atmosphere, we show that the `consumed biomass' accounting approach overestimates global carbon emissions by 4.0%, or 100 Teragrams, annually. The required correction is significant and represents 9% of the net global forest carbon sink estimated annually. To correctly partition burnt carbon between that emitted to the atmosphere and that remaining as a post-fire residue requires the post-burn carbon content to be estimated, which is quite often not undertaken in atmospheric emissions studies. To broaden our understanding of ecosystem carbon fluxes, it is recommended that the change in carbon content associated with burnt residues be accounted for. Apart from correctly partitioning burnt carbon between the emitted and residue pools, it enables an accounting approach which can assess the efficacy of fire management operations targeted at sequestering carbon from fire. 
These findings are particularly relevant for the second commitment period for the Kyoto protocol, since improved landscape fire
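The correction described above is straightforward bookkeeping: burnt carbon splits into an emitted fraction and a post-fire residue, and the "consumed biomass" convention corresponds to setting the emitted fraction to 1. The totals below are back-calculated from the 4.0%/100 Tg figures in the abstract and are our assumption:

```python
def fire_emissions_tg(burnt_carbon_tg, emitted_fraction):
    """Partition burnt carbon into emitted carbon and post-fire residue.
    The 'consumed biomass' convention is the emitted_fraction = 1 case."""
    emitted = burnt_carbon_tg * emitted_fraction
    residue = burnt_carbon_tg - emitted
    return emitted, residue

# a 4.0% overestimate equal to ~100 Tg implies ~2500 Tg reported annually
# under the consumed-biomass convention (back-calculation, an assumption)
reported = 100 / 0.04                       # ≈ 2500 Tg C
emitted, residue = fire_emissions_tg(reported, 1 - 0.04)
```

Tracking `residue` explicitly is what allows an accounting scheme to credit fire management that leaves more carbon on site.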

  20. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.

  1. Field Application of Cable Tension Estimation Technique Using the h-SI Method

    Directory of Open Access Journals (Sweden)

    Myung-Hyun Noh

    2015-01-01

Full Text Available This paper investigates the field applicability of a new system identification technique for estimating the tensile force of cables in long-span bridges. The newly proposed h-SI method, which combines a sensitivity-updating algorithm with an advanced hybrid microgenetic algorithm, not only avoids the trap of local minima at the initial search stage but also finds the optimal solution with better numerical efficiency than existing methods. First, this paper overviews the procedure of tension estimation through a theoretical formulation. Secondly, the validity of the proposed technique is numerically examined using a set of dynamic data obtained from benchmark numerical samples considering the effects of sag extensibility and bending stiffness of a sag-cable system. Finally, the feasibility of the proposed method is investigated through actual field data extracted from the cable-stayed Seohae Bridge. The test results show that the existing methods require precise initial data in advance, whereas the proposed method is not affected by such initial information. In particular, the proposed method improves accuracy and the convergence rate toward final values. Consequently, the proposed method can be more effective than existing methods in characterizing the tensile force variation of cable structures.

  2. Estimation of Postmortem Interval Using the Radiological Techniques, Computed Tomography: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiulin Wang

    2017-01-01

Full Text Available Estimation of the postmortem interval (PMI) has been an important and difficult subject in forensic study. It is a primary task of forensic work, and it can help guide field investigation. With the development of computed tomography (CT) technology, CT imaging techniques are now being applied more frequently to the field of forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and PMI with the imaging software Max Viewer, obtained multiparameter nonlinear regression equations for the different organs, and provides an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the time of death and the discovery or inspection of the corpse. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of time of death. We examined the subtle postmortem changes in adult rabbits, the shape and size of tissues and organs, and the relationships between adjacent organs in three-dimensional space in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at 20°C room temperature under sealed conditions and protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group.
The whole

  3. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
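The regression-based norming step can be sketched as follows. The coefficients and error term below are invented for illustration (actual RBN equations are derived from a normative sample); the structure is the method the abstract describes: a predicted score from demographics plus estimated premorbid IQ, and the observed score expressed as a standardised residual.

```python
# Hypothetical regression-based norm (RBN). All coefficients are invented
# for illustration; real RBN equations come from the normative sample.
COEF = {"intercept": 52.0, "age": -0.25, "education": 0.8,
        "sex_male": -1.1, "est_iq": 0.15}
RMSE = 6.0   # residual standard error of the normative regression

def expected_score(age, education, sex_male, est_iq):
    return (COEF["intercept"] + COEF["age"] * age
            + COEF["education"] * education
            + COEF["sex_male"] * sex_male
            + COEF["est_iq"] * est_iq)

def rbn_z(observed, age, education, sex_male, est_iq):
    """How far the observed score falls from the demographically
    adjusted expectation, in units of the regression's error."""
    return (observed - expected_score(age, education, sex_male, est_iq)) / RMSE

# e.g. a 70-year-old man with 16 years of education and estimated IQ of 110
z = rbn_z(observed=45.0, age=70, education=16, sex_male=1, est_iq=110)
```

A markedly negative z flags performance below the individually benchmarked expectation, which is the diagnostic signal the norms are built to sharpen.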

  4. Utilization of nuclear techniques to estimate water erosion in tobacco plantations in Cuba

    International Nuclear Information System (INIS)

    Gil, Reinaldo H.; Peralta, José L.; Carrazana, Jorge; Fleitas, Gema; Aguilar, Yulaidis; Rivero, Mario; Morejón, Yilian M.; Oliveira, Jorge

    2015-01-01

Soil erosion is a relevant factor in land degradation, causing several negative impacts at different levels in the environment, agriculture, etc. The tobacco plantations in the western part of the country have been negatively affected by water erosion due to natural and human factors. For the implementation of a strategy for sustainable land management, a key element is to quantify soil losses in order to establish policies for soil conservation. Nuclear techniques have advantages in comparison with traditional methods of assessing soil erosion and have been applied in different agricultural settings worldwide. Tobacco cultivation in Pinar del Río takes place on soils with high erosion levels, so it is important to apply techniques that support the quantification of the soil erosion rate. This work shows the use of the 137Cs technique to characterize the soil erosion status in two sectors of a farm with tobacco plantations located in the south-western plain of Pinar del Río province. The sampling strategy included the evaluation of selected transects in the slope direction for the studied site. The soil samples were collected so as to incorporate the whole 137Cs profile. Different conversion models were applied, and the Mass Balance Model II provided the most representative results, estimating soil erosion rates from –18.28 to 8.15 t ha-1 yr-1. (author)

  5. Estimation of Apple Volume and Its Shape Indentation Using Image Processing Technique and Neural Network

    Directory of Open Access Journals (Sweden)

    M Jafarlou

    2014-04-01

Full Text Available Physical properties of agricultural products such as volume are the most important parameters influencing grading and packaging systems. They should be measured accurately, as they are considered in any good system design. Image processing and neural network techniques are both non-destructive and useful methods which have recently been used for this purpose. In this study, images of apples were captured from a constant distance and then processed in MATLAB software, and the edges of the apple images were extracted. The interior area of the apple image was divided into thin trapezoidal elements perpendicular to the longitudinal axis. The total volume of the apple was estimated by summing the incremental volumes of these elements revolved around the apple's longitudinal axis. A picture of a half-cut apple was also captured in order to obtain the volume of the apple shape's indentation, which was subtracted from the previously estimated total volume. The real volume of the apples was measured using the water displacement method, and the relation between the real volume and the estimated volume was obtained. The t-test and Bland-Altman analysis indicated that the difference between the real volume and the estimated volume was not significant (p>0.05); the mean difference was 1.52 cm3 and the accuracy of measurement was 92%. Utilizing a neural network with input variables of dimension and mass increased the accuracy up to 97% and decreased the difference between the mean volumes to 0.7 cm3.
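The element-wise volume estimate described above is a discrete solid of revolution. A minimal sketch (with hypothetical radii, not the paper's MATLAB code) sums conical-frustum slices along the fruit's axis; a sphere is used as a sanity check since its true volume is known.

```python
import math

def volume_of_revolution(radii_cm, dx_cm):
    """Volume of an axisymmetric solid from its edge profile: each slice
    between consecutive radii r0, r1 is a conical frustum of height dx."""
    return sum(math.pi * dx_cm / 3.0 * (r0 * r0 + r0 * r1 + r1 * r1)
               for r0, r1 in zip(radii_cm[:-1], radii_cm[1:]))

# Sanity check: sample a sphere of radius 3 cm along its axis
R, n = 3.0, 2000
dx = 2.0 * R / n
radii = [math.sqrt(max(R * R - (-R + i * dx) ** 2, 0.0)) for i in range(n + 1)]
v_est = volume_of_revolution(radii, dx)
v_true = 4.0 / 3.0 * math.pi * R ** 3   # about 113.1 cm3
```

In practice the radii come from the extracted edge profile of the apple image, and the indentation volume from the half-cut image is subtracted from this total.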

  6. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    Science.gov (United States)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and hence wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often achieved by separating the update of the state and the parameters using the so-called dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, and a model propagation step is performed later. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile (up to 2000 m deep) data and surface partial CO2 measurements from the Mike weather station (66o N, 2o E) to estimate
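A toy version of the joint (stochastic EnKF) update that the abstract contrasts with the dual filter can be sketched on a scalar model. The model, noise levels and ensemble size below are all invented; the point is the single Kalman update applied to the augmented [state, parameter] vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint state-parameter EnKF: model x_{k+1} = a*x_k + w with unknown
# parameter a (true value 0.9); we observe x directly and update the
# augmented vector [x, a] with one stochastic EnKF analysis per step.
true_a, proc_std, obs_std = 0.9, 0.1, 0.05
n_ens, n_steps = 200, 80
x_true = 1.0
ens = np.stack([rng.normal(1.0, 0.5, n_ens),    # state members
                rng.normal(0.5, 0.3, n_ens)])   # parameter members

for _ in range(n_steps):
    x_true = true_a * x_true + rng.normal(0.0, proc_std)
    y = x_true + rng.normal(0.0, obs_std)
    # forecast: each member propagates with its own parameter value
    ens[0] = ens[1] * ens[0] + rng.normal(0.0, proc_std, n_ens)
    # analysis: Kalman update of the augmented vector, H = [1, 0]
    A = ens - ens.mean(axis=1, keepdims=True)   # ensemble anomalies
    hA = A[0]
    gain = (A @ hA) / (n_ens - 1) / (hA @ hA / (n_ens - 1) + obs_std ** 2)
    ens += np.outer(gain, y + rng.normal(0.0, obs_std, n_ens) - ens[0])

a_est = float(ens[1].mean())   # should drift toward the true value 0.9
```

The parameter is corrected only through its ensemble correlation with the observed state, which is exactly the mechanism that can become inconsistent in strongly nonlinear regimes and motivates the dual and smoothing-based variants.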

  7. Estimation of technical indices for an accelerator tritium exhaust-gas decontamination system

    International Nuclear Information System (INIS)

    Yang Haisu

    2005-10-01

According to the basic requirements of the accelerator application, the main properties and technological constants of a purification system for exhaust gas mixed with tritium (T) are analysed and estimated in detail. The system can be operated on the high-flux neutron production instrument. The vented amount of exhaust gas mixed with T exceeds 4 m 3 /d. The maximal concentration of T is approximately 1 x 10 12 Bq/m 3 , so the tritium decontamination factor should exceed 1 x 10 3 . A purification technique based on catalytic oxidation, molecular sieve filtration and adsorption, widely used both domestically and abroad, is suggested. Structurally, inlet and outlet gating and a three-stage tandem purification train under fully automatic intelligent control with manual override are adopted, together with on-line tritium concentration monitoring devices. (authors)

  8. Estimation of trace elements in some anti-diabetic medicinal plants using PIXE technique

    International Nuclear Information System (INIS)

    Naga Raju, G.J.; Sarita, P.; Ramana Murty, G.A.V.; Ravi Kumar, M.; Seetharami Reddy, B.; John Charles, M.; Lakshminarayana, S.; Seshi Reddy, T.; Reddy, S. Bhuloka; Vijayan, V.

    2006-01-01

    Trace elemental analysis was carried out in various parts of some anti-diabetic medicinal plants using PIXE technique. A 3 MeV proton beam was used to excite the samples. The elements Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, Br, Rb and Sr were identified and their concentrations were estimated. The results of the present study provide justification for the usage of these medicinal plants in the treatment of diabetes mellitus (DM) since they are found to contain appreciable amounts of the elements K, Ca, Cr, Mn, Cu, and Zn, which are responsible for potentiating insulin action. Our results show that the analyzed medicinal plants can be considered as potential sources for providing a reasonable amount of the required elements other than diet to the patients of DM. Moreover, these results can be used to set new standards for prescribing the dosage of the herbal drugs prepared from these plant materials

  9. Intercomparison of techniques for estimation of sedimentation rate in the Sabah and Sarawak coastal waters

    International Nuclear Information System (INIS)

    Zal U'yun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abd Rahim Mohamed

    2011-01-01

A total of eight 50 cm long sediment cores were taken in the Sabah and Sarawak coastal waters using a gravity corer in 2004 to estimate sedimentation rates using four mathematical models: CIC, Shukla-CIC, CRS and ADE. The average sedimentation rate, calculated from the vertical profile of 210 Pbex in the sediment cores, ranged from 0.24 to 0.48 cm year -1 . The findings also showed that the sedimentation rates derived from the four models were generally in good agreement, with similar or comparable values at some stations. However, statistical analysis with a paired-sample t-test indicated that the CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the coastal area. (author)
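The CIC model used above has a closed form worth sketching: under a constant initial concentration, excess 210Pb activity decays exponentially with burial depth, ln A(z) = ln A0 − (λ/s)z, so the sedimentation rate s falls out of the slope of ln(activity) versus depth. The core profile below is synthetic, for illustration only.

```python
import math

# CIC sedimentation rate from an excess 210Pb depth profile.
LAMBDA_PB210 = math.log(2) / 22.3           # 210Pb decay constant, 1/yr

depth_cm = [2.5 * i for i in range(10)]     # core sampled every 2.5 cm
s_true = 0.35                               # cm/yr used to synthesise the profile
activity = [80.0 * math.exp(-LAMBDA_PB210 * z / s_true) for z in depth_cm]

# least-squares slope of ln(activity) against depth
n = len(depth_cm)
mx = sum(depth_cm) / n
my = sum(math.log(a) for a in activity) / n
slope = (sum((z - mx) * (math.log(a) - my) for z, a in zip(depth_cm, activity))
         / sum((z - mx) ** 2 for z in depth_cm))
s_est = -LAMBDA_PB210 / slope               # recovered sedimentation rate, cm/yr
```

On real cores the log-activity profile is noisy and may be non-monotonic, which is why the abstract compares CIC against the CRS, Shukla-CIC and ADE alternatives.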

  10. 40Ar-39Ar method for age estimation: principles, technique and application in orogenic regions

    International Nuclear Information System (INIS)

    Dalmejer, R.

    1984-01-01

A variant of the K-Ar method for age estimation, the recently developed 40 Ar/ 39 Ar method, is described. This method does not require direct analysis of potassium; its content is calculated as a function of 39 Ar, which is formed from 39 K under neutron activation. Errors resulting from interactions of potassium and calcium nuclei with neutrons are considered. Attention is paid to the technique of gradual heating used in the 40 Ar- 39 Ar method and to obtaining the age spectrum. The applicability of the isochron diagram is discussed for the case of excess argon in a sample. Examples of the application of the 40 Ar- 39 Ar method for dating events in orogenic regions are presented
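The age equation behind the method can be made concrete. With the irradiation parameter J determined from a flux monitor of known age, the age of an unknown follows from its measured radiogenic 40Ar*/39Ar ratio; the monitor age and ratios below are hypothetical, and the decay constant is the conventional total 40K value.

```python
import math

# 40Ar-39Ar age equation: t = (1/lambda) * ln(1 + J * R), where R is the
# measured radiogenic 40Ar*/39Ar ratio of a heating step.
LAMBDA_K40 = 5.543e-10   # total 40K decay constant, 1/yr

def ar_ar_age(R, J):
    return math.log(1.0 + J * R) / LAMBDA_K40

def j_from_monitor(monitor_age_yr, R_monitor):
    """Solve the age equation for J using a monitor of known age."""
    return (math.exp(LAMBDA_K40 * monitor_age_yr) - 1.0) / R_monitor

J = j_from_monitor(98.8e6, 5.62)   # hypothetical 98.8 Ma flux monitor
t = ar_ar_age(12.3, J)             # age of a hypothetical step, in years
```

In step-heating, this equation is applied to each gas release separately, producing the age spectrum the abstract refers to.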

  11. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for determining decreased salivary flow. A novel assay system comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper and a coloring reagent, based on the color reaction of iodine-starch and theory of paper chromatography, was designed. We investigated the relationship between resting whole salivary rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803; P bedridden and disabled elderly people.

  12. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Khoo Mei; Ghaffar, Mazlan Abd. [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-09-03

This study examines the movement of food items and the estimation of gastric emptying time using X-radiography techniques in the clownfish (Amphiprion ocellaris) fed in captivity. Fishes were voluntarily fed to satiation, after being deprived of food for 72 hours, using pellets labelled with barium sulphate (BaSO{sub 4}). The movement of the food items was monitored at different times after feeding. A total of 36 hours was needed for the food items to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship between satiation meal size and body weight was allometric, with a power value of 1.28.

  13. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
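For Gaussian output distributions with diagonal covariance, the inverse-style estimation reduces to a weighted least-squares problem in the explanatory variable: each state mean is mu_i = H_i [1, v]^T, and maximising the likelihood over v has a closed form. The matrices, variances and observations below are synthetic stand-ins for trained MRHSMM parameters, used only to show the shape of the estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-state regression matrices, variances, and observed features.
dim, n_states = 3, 5
H = rng.normal(size=(n_states, dim, 2))     # state means: mu_i = H_i @ [1, v]
var = np.full((n_states, dim), 0.05)        # diagonal observation variances
v_true = 0.7
obs = np.stack([H[i] @ np.array([1.0, v_true]) for i in range(n_states)])
obs = obs + rng.normal(scale=np.sqrt(var))  # noisy "acoustic features"

# ML estimate: v = sum_i h1' W (o_i - h0) / sum_i h1' W h1, with W = diag(1/var)
num = den = 0.0
for i in range(n_states):
    h0, h1, w = H[i][:, 0], H[i][:, 1], 1.0 / var[i]
    num += float(h1 @ (w * (obs[i] - h0)))
    den += float(h1 @ (w * h1))
v_est = num / den   # recovered style intensity, close to v_true
```

The full technique additionally sums over state occupancies and durations of the HSMM, but the per-state weighted least-squares core is the same.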

  14. Use of environmental isotope tracer and GIS techniques to estimate basin recharge

    Science.gov (United States)

    Odunmbaku, Abdulganiu A. A.

The extensive use of ground water only began with advances in pumping technology in the early portion of the 20th century. Groundwater provides the majority of the fresh water supply for municipal, agricultural and industrial uses, primarily because of the little to no treatment it requires. Estimating the volume of groundwater available in a basin is a daunting task, and no accurate measurements can be made. Usually water budgets and simulation models are primarily used to estimate the volume of water in a basin. Precipitation, land surface cover and subsurface geology are factors that affect recharge; these factors affect percolation, which in turn affects groundwater recharge. Depending on precipitation, soil chemistry, groundwater chemical composition, gradient and depth, the age and rate of recharge can be estimated. The present research estimates the recharge in the Mimbres, Tularosa and Diablo Basins using the chloride environmental isotope, the chloride mass-balance approach and GIS. It also determines the effect of elevation on recharge rate. The Mimbres and Tularosa Basins are located in southern New Mexico and extend southward into Mexico. The Diablo Basin is located in Texas and extends southward. This research utilizes the chloride mass-balance approach to estimate the recharge rate through the collection of groundwater data from wells and precipitation. The data were analysed statistically to eliminate duplication, outliers, and incomplete records. Cluster analysis, Piper diagrams and tests of statistical significance were performed on the groundwater parameters; the infiltration rate was determined using the chloride mass-balance technique. The data were then analysed spatially using ArcGIS 10. Regions of active recharge were identified in the Mimbres and Diablo Basins, but could not be clearly identified in the Tularosa Basin.
CMB recharge for Tularosa Basin yields 0.04037mm/yr (0.0016in/yr), Diablo Basin was 0.047mm/yr (0.0016 in/yr), and 0.2153mm/yr (0.00848in
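The chloride mass-balance calculation itself is a one-line ratio. At steady state, the chloride flux deposited by precipitation equals the flux carried into the aquifer by recharge, so recharge is precipitation scaled by the ratio of chloride concentrations. The numbers below are illustrative, not the study's measurements.

```python
# Chloride mass balance: R = P * Cl_precip / Cl_groundwater.

def cmb_recharge_mm_per_yr(precip_mm_per_yr, cl_precip_mg_l, cl_gw_mg_l):
    """Recharge rate from the chloride concentration ratio, steady state."""
    return precip_mm_per_yr * cl_precip_mg_l / cl_gw_mg_l

# e.g. 300 mm/yr rainfall, 0.3 mg/L chloride in rain, 150 mg/L in groundwater
r = cmb_recharge_mm_per_yr(300.0, 0.3, 150.0)   # -> 0.6 mm/yr
```

The strongly enriched groundwater chloride in arid basins is what drives the very small recharge rates (fractions of a millimetre per year) reported above.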

  15. Estimating the vibration level of an L-shaped beam using power flow techniques

    Science.gov (United States)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.

  16. Heart Failure: Diagnosis, Severity Estimation and Prediction of Adverse Events Through Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Evanthia E. Tripoliti

Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is one of the leading causes of hospital admission that healthcare professionals encounter in their clinical practice. The costs are very high, reaching up to 2% of total health costs in developed countries. Building an effective disease management strategy requires analysis of a large amount of data, early detection of the disease, assessment of its severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients and reduce the associated medical costs. Toward this direction, machine learning techniques have been employed. The aim of this paper is to present the state of the art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence, estimating the subtype, assessing the severity of heart failure and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality are presented. To the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented. Keywords: Heart failure, Diagnosis, Prediction, Severity estimation, Classification, Data mining

  17. Nonparametric statistical techniques used in dose estimation for beagles exposed to inhaled plutonium nitrate

    International Nuclear Information System (INIS)

    Stevens, D.L.; Dagle, G.E.

    1986-01-01

    Retention and translocation of inhaled radionuclides are often estimated from the sacrifice of multiple animals at different time points. The data for each time point can be averaged and a smooth curve fitted to the mean values, or a smooth curve may be fitted to the entire data set. However, an analysis based on means may not be the most appropriate if there is substantial variation in the initial amount of the radionuclide inhaled or if the data are subject to outliers. A method has been developed that takes account of these problems. The body burden is viewed as a compartmental system, with the compartments identified with body organs. A median polish is applied to the multiple logistic transform of the compartmental fractions (compartment burden/total burden) at each time point. A smooth function is fitted to the results of the median polish. This technique was applied to data from beagles exposed to an aerosol of 239 Pu(NO 3 ) 4 . Models of retention and translocation for lungs, skeleton, liver, kidneys, and tracheobronchial lymph nodes were developed and used to estimate dose. 4 refs., 3 figs., 4 tabs
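Tukey's median polish, the resistant two-way decomposition applied here to the transformed compartment fractions, is simple to sketch. This is a generic implementation (time points as rows, compartments as columns), not the authors' code; the table values are invented.

```python
import statistics

def median_polish(table, n_iter=10):
    """Tukey's median polish: decompose table[i][j] into
    overall + row_eff[i] + col_eff[j] + resid[i][j] using medians,
    which resists the outliers that motivate the approach."""
    rows, cols = len(table), len(table[0])
    resid = [row[:] for row in table]
    row_eff, col_eff, overall = [0.0] * rows, [0.0] * cols, 0.0
    for _ in range(n_iter):
        for i in range(rows):                        # sweep out row medians
            m = statistics.median(resid[i])
            row_eff[i] += m
            resid[i] = [v - m for v in resid[i]]
        m = statistics.median(col_eff)               # re-centre column effects
        overall += m
        col_eff = [v - m for v in col_eff]
        for j in range(cols):                        # sweep out column medians
            m = statistics.median(resid[i][j] for i in range(rows))
            col_eff[j] += m
            for i in range(rows):
                resid[i][j] -= m
        m = statistics.median(row_eff)               # re-centre row effects
        overall += m
        row_eff = [v - m for v in row_eff]
    return overall, row_eff, col_eff, resid

# sacrifice times (rows) x compartments (cols), e.g. logit-transformed fractions
table = [[0.2, -1.1, 0.4],
         [0.5, -0.9, 0.1],
         [0.9, -0.6, -0.2]]
overall, row_eff, col_eff, resid = median_polish(table)
```

The smooth retention function is then fitted to the polished effects rather than to raw means, so a single aberrant animal cannot drag the curve.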

  18. The Use of Coupled Code Technique for Best Estimate Safety Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2006-01-01

Issues connected with the thermal-hydraulics and neutronics of nuclear plants still challenge the design, safety and operation of light water reactors (LWR). The lack of full understanding of the complex mechanisms governing the interaction between these issues has imposed the adoption of conservative safety limits. These safety margins put restrictions on the optimal exploitation of the plants and consequently reduce their economic profit. In the light of the sustained development in computer technology, code capabilities have been enlarged substantially. Consequently, advanced safety evaluations and design optimizations that were not possible a few years ago can now be performed. In fact, during the last decades Best Estimate (BE) neutronic and thermal-hydraulic calculations were carried out along rather parallel paths with only few interactions between them. Nowadays, it has become possible to switch to a new generation of computational tools, namely the coupled code technique. The application of such a method is mandatory for the analysis of accident conditions with strong coupling between the core neutronics and the primary circuit thermal-hydraulics, especially when asymmetrical processes take place in the core, leading to local space-dependent power generation. The current study demonstrates the maturity level achieved in the calculation of 3-D core performance during complex accident scenarios in NPPs. Typical applications are outlined and discussed, showing the main features and limitations of this technique. (author)

  19. Estimation of Shie Glacier Surface Movement Using Offset Tracking Technique with Cosmo-Skymed Images

    Science.gov (United States)

    Wang, Q.; Zhou, W.; Fan, J.; Yuan, W.; Li, H.; Sousa, J. J.; Guo, Z.

    2017-09-01

Movement is one of the most important characteristics of glaciers and can cause serious natural disasters. For this reason, monitoring these massive blocks is a crucial task. Synthetic Aperture Radar (SAR) can operate all day in any weather conditions, and the images acquired by SAR contain intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques, like DInSAR and offset tracking, based on the information of SAR images, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, has a great potential danger of outburst. Shie glacier is situated upstream of the Sangwang lake. Hence, it is important to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning from August to December 2016 are used with the offset tracking technique to estimate the surface movement of Shie glacier. The maximum velocity of Shie glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement in summer is faster than in winter, and the velocity decreases as the local temperature decreases. Based on the above conclusions, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.
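At its core, offset tracking searches for the shift that maximises patch similarity between the two acquisitions. A minimal integer-pixel version on synthetic data can be sketched as follows; real SAR processing adds oversampling, subpixel peak fitting and quality thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

def ncc(a, b):
    """Normalised cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_offset(img0, img1, y, x, size=16, search=5):
    """Best integer (dy, dx) for the size x size patch at (y, x),
    searched exhaustively within +/- search pixels."""
    tpl = img0[y:y + size, x:x + size]
    best, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = img1[y + dy:y + dy + size, x + dx:x + dx + size]
            score = ncc(tpl, win)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off

img0 = rng.random((64, 64))                        # synthetic intensity image
img1 = np.roll(img0, shift=(3, -2), axis=(0, 1))   # "glacier" moved 3 down, 2 left
offset = track_offset(img0, img1, y=24, x=24)      # -> (3, -2)
```

Dividing the recovered offset by the acquisition time span converts it into the cm/d velocities reported in the abstract.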

  20. Estimation the Amount of Oil Palm Trees Production Using Remote Sensing Technique

    Science.gov (United States)

    Fitrianto, A. C.; Tokimatsu, K.; Sufwandika, M.

    2017-12-01

Currently, fossil fuels are used as the main source of power supply to generate energy, including electricity. Depletion of fossil fuels has been causing an increasing price of crude petroleum and a growing demand for alternative energy which is renewable and environment-friendly, derived from vegetable oils such as palm oil, rapeseed and soybean. Indonesia is known as a major palm oil producer; oil palm is its largest agricultural industry, with a total harvested oil palm area estimated to have grown to 8.9 million ha in 2015. On the other hand, the lack of information about the age of oil palm trees, their changes and their spatial distribution is a main problem for energy planning. This research was conducted to estimate the fresh fruit bunch (FFB) production of oil palm and its distribution using a remote sensing technique. The Cimulang oil palm plantation was chosen as the study area. As a first step, the age of the oil palm trees was estimated based on their canopy density as the result of Landsat 8 OLI analysis and classified into five classes. From this result, we correlated oil palm age with average FFB production per six months, classified into seed (0-3 years, 0 kg), young (4-8 years, 68.77 kg), teen (9-14 years, 109.08 kg), and mature (14-25 years, 73.91 kg). The result of the satellite image analysis shows that the Cimulang plantation area consists mostly of teen oil palm trees, covering around 81.5% of the area, followed by mature oil palm trees with 18.5%, corresponding to 100 hectares, with a total FFB production every six months of around 7,974,787.24 kg.
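The final production figure is a count-weighted sum of the class yields. The per-tree six-month yields are those quoted in the abstract; the tree counts below are hypothetical, since the abstract reports area shares rather than counts.

```python
# Count-weighted total fresh fruit bunch (FFB) production per six months.
YIELD_6MO_KG = {"seed": 0.0, "young": 68.77, "teen": 109.08, "mature": 73.91}

def total_ffb_kg(tree_counts):
    """Sum each age class's tree count times its average per-tree yield."""
    return sum(YIELD_6MO_KG[cls] * n for cls, n in tree_counts.items())

# e.g. a hypothetical block split roughly 81.5% teen / 18.5% mature
ffb = total_ffb_kg({"teen": 57213, "mature": 12987})
```

In the study the class shares come from the canopy-density classification of the Landsat 8 OLI imagery, so the same weighted sum can be driven directly from classified pixel areas.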

  1. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  2. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    Science.gov (United States)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  3. ESTIMATION OF SHIE GLACIER SURFACE MOVEMENT USING OFFSET TRACKING TECHNIQUE WITH COSMO-SKYMED IMAGES

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-09-01

    Full Text Available Movement is one of the most important characteristics of glaciers and can cause serious natural disasters. For this reason, monitoring these massive blocks is a crucial task. Synthetic Aperture Radar (SAR) can operate all day in any weather conditions, and the images acquired by SAR contain both intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques based on the information in SAR images, such as DInSAR and offset tracking, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, poses a great potential danger of outburst. Shie glacier is situated upstream of the Sangwang lake. Hence, it is important to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning August to December 2016 are processed with the offset tracking technique to estimate the surface movement of Shie glacier. The maximum velocity of Shie glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement is faster in summer than in winter, and the velocity decreases as the local temperature decreases. Based on the above conclusions, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.
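    The core of offset tracking is a normalized cross-correlation search between co-registered SAR amplitude patches; the pixel offset at the correlation peak, scaled by pixel spacing and time separation, gives the surface velocity. The sketch below illustrates this on synthetic data (the patch size, search window, pixel spacing and 16-day separation are all invented, and real processing adds sub-pixel oversampling).

```python
import numpy as np

def track_offset(master, slave, search=5):
    """Return the (row, col) shift of `slave` relative to `master`
    that maximizes normalized cross-correlation."""
    best, best_shift = -np.inf, (0, 0)
    m = master[search:-search, search:-search]
    m = (m - m.mean()) / m.std()                      # zero-mean, unit-variance patch
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            s = slave[search + dr:slave.shape[0] - search + dr,
                      search + dc:slave.shape[1] - search + dc]
            s = (s - s.mean()) / s.std()
            ncc = (m * s).mean()                      # normalized cross-correlation
            if ncc > best:
                best, best_shift = ncc, (dr, dc)
    return best_shift

# Synthetic example: a speckle pattern displaced by 3 pixels in range
rng = np.random.default_rng(0)
master = rng.normal(size=(64, 64))
slave = np.roll(master, shift=3, axis=1)
dr, dc = track_offset(master, slave)

pixel_spacing_m, dt_days = 1.0, 16.0                  # assumed acquisition geometry
velocity_cm_per_day = np.hypot(dr, dc) * pixel_spacing_m * 100 / dt_days
```

    With the assumed 1 m pixels and 16-day separation, a 3-pixel offset corresponds to 18.75 cm/d, the same order as the 51 cm/d reported for the glacier tongue.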

  4. A time series deformation estimation in the NW Himalayas using SBAS InSAR technique

    Science.gov (United States)

    Kumar, V.; Venkataraman, G.

    2012-12-01

    A time series land deformation study of the north-western Himalayan region is presented. Synthetic aperture radar (SAR) interferometry (InSAR) is an important tool for measuring the land displacement caused by different geological processes [1]. Frequent spatial and temporal decorrelation in the Himalayan region is a strong impediment to precise deformation estimation using the conventional interferometric SAR approach. In such cases, advanced DInSAR approaches such as PSInSAR and the Small Baseline Subset (SBAS) can be used to estimate earth surface deformation. The SBAS technique [2] is a DInSAR approach which uses twelve or more repeat SAR acquisitions in different combinations of properly chosen data subsets to generate DInSAR interferograms using the two-pass interferometric approach. Finally, it leads to the generation of mean deformation velocity maps and displacement time series. Herein, the SBAS algorithm has been used for time series deformation estimation in the NW Himalayan region. ENVISAT ASAR IS2 swath data from 2003 to 2008 have been used for quantifying slow deformation. The Himalayan region is a very active tectonic belt, and active orogeny plays a significant role in the land deformation process [3]. Geomorphology in the region is unique and reacts adversely to climate change, bringing landslides and subsidence. Settlements on the hill slopes are prone to landslides, landslips, rockslides and soil creep. These hazardous features have hampered the overall progress of the region, as they obstruct roads and the flow of traffic, break communication, block flowing water in streams to create temporary reservoirs, and bring down soil cover that adds enormous silt and gravel to the streams. It has been observed that average deformation varies from -30.0 mm/year to 10 mm/year in the NW Himalayan region. References [1] Massonnet, D., Feigl, K.L., Rossi, M. and Adragna, F. (1994) Radar interferometry mapping of
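    The SBAS inversion itself is a least-squares problem: each small-baseline interferogram constrains the velocity accumulated over the date intervals it spans, and the per-interval velocities are recovered by (SVD-based) least squares. A single-pixel, noiseless toy version (all dates, pairs and values invented):

```python
import numpy as np

dates = np.array([0.0, 35.0, 70.0, 140.0, 175.0])        # acquisition days (assumed)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # small-baseline pairs

wavelength_mm = 56.0                                      # C-band, ~5.6 cm
true_velocity = -0.02                                     # mm/day (toy subsidence)

# Design matrix A: each interferogram integrates velocity over the
# date intervals it spans; unknowns are per-interval velocities.
n_int = len(dates) - 1
A = np.zeros((len(pairs), n_int))
for row, (i, j) in enumerate(pairs):
    for k in range(i, j):
        A[row, k] = dates[k + 1] - dates[k]

los_disp = A @ np.full(n_int, true_velocity)              # mm, noiseless toy data
phase = 4 * np.pi * los_disp / wavelength_mm              # interferometric phase

# Least-squares inversion back to interval velocities (mm/day)
v_hat = np.linalg.lstsq(A, phase * wavelength_mm / (4 * np.pi), rcond=None)[0]
```

    In a real SBAS run the same system is solved per pixel, with SVD handling rank-deficient subsets and an atmospheric filtering step afterwards.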

  5. Estimation of Crop Coefficient of Corn (Kccorn) under Climate Change Scenarios Using Data Mining Technique

    Directory of Open Access Journals (Sweden)

    Kampanad Bhaktikul

    2012-01-01

    Full Text Available The main objectives of this study are to determine the crop coefficient of corn (Kccorn) using a data mining technique under climate change scenarios, and to develop guidelines for future water management based on climate change scenarios. Variables including date, maximum temperature, minimum temperature, precipitation, humidity, wind speed, and solar radiation from seven meteorological stations during 1991 to 2000 were used. The Cross-Industry Standard Process for Data Mining (CRISP-DM) was applied for data collection and analyses. The procedure comprises investigation of the input data, model set-up using Artificial Neural Networks (ANNs), model evaluation, and finally estimation of the Kccorn. Three climate change scenarios of carbon dioxide (CO2) concentration level were set: 360 ppm, 540 ppm, and 720 ppm. The results indicated that the best number of nodes for the input layer - hidden layer - output layer was 7-13-1. The correlation coefficient of the model was 0.99. The predicted Kccorn revealed that the evapotranspiration (ETcorn) pattern will change significantly with the CO2 concentration level. From the model predictions, ETcorn will decrease by 3.34% when CO2 increases from 360 ppm to 540 ppm. For the doubled CO2 concentration from 360 ppm to 720 ppm, ETcorn will increase by 16.13%. Guidelines for future water management to cope with climate change are suggested.
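    The reported 7-13-1 topology maps the seven weather inputs through 13 hidden nodes to a single Kc output. A structural sketch of that forward pass (weights here are random placeholders, the input units are assumed, and the study's actual network was trained on the 1991-2000 station data):

```python
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(13, 7)), np.zeros(13)   # input layer -> 13 hidden nodes
W2, b2 = rng.normal(size=(1, 13)), np.zeros(1)    # hidden layer -> single Kc output

def predict_kc(x):
    """x: 7 inputs (date index, Tmax, Tmin, precip, humidity, wind, solar)."""
    h = np.tanh(W1 @ x + b1)                      # hidden-layer activations
    return float((W2 @ h + b2)[0])                # linear output: estimated Kc

x = np.array([180.0, 33.5, 24.0, 2.1, 0.72, 1.8, 18.4])  # one day (assumed units)
kc = predict_kc(x)
```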

  6. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET)

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis, the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing them. The developed tower vibration model comprises two parts: a sub-model used below rated wind speed and another used above rated wind speed. Supervisory control and data acquisition (SCADA) data from a single wind turbine, collected from March to April 2006, are used in the modeling. Model validation has been subsequently undertaken and is presented. This research demonstrates the effectiveness of the NSET approach to tower vibration, in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry, a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
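    NSET stores healthy-state observations as columns of a memory matrix D and estimates any new state as a similarity-weighted combination of those columns; a large residual between observation and estimate flags an anomaly. A minimal sketch, using the common inverse-distance similarity operator (the two-variable memory below is invented):

```python
import numpy as np

def nset_predict(D, x_obs):
    """D: (n_vars, n_memory) memory matrix; x_obs: (n_vars,) observation."""
    def sim(a, b):                      # nonlinear similarity operator
        return 1.0 / (1.0 + np.linalg.norm(a - b))
    # Pairwise similarities among memory vectors, and observation-to-memory
    G = np.array([[sim(D[:, i], D[:, j]) for j in range(D.shape[1])]
                  for i in range(D.shape[1])])
    a = np.array([sim(D[:, i], x_obs) for i in range(D.shape[1])])
    w = np.linalg.solve(G, a)           # weights: (D^T (*) D)^-1 (D^T (*) x_obs)
    return D @ w

# Toy memory of (wind speed, tower vibration) healthy states
D = np.array([[4.0, 8.0, 12.0],
              [0.1, 0.3, 0.6]])
x_healthy = np.array([8.0, 0.3])        # a state already in memory
x_est = nset_predict(D, x_healthy)
residual = np.linalg.norm(x_healthy - x_est)   # ~0 for a healthy state
```

    For an observation already represented in the memory matrix the residual is essentially zero; faults such as blade angle asymmetry show up as persistently large residuals.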

  7. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    Science.gov (United States)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem whose cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in weakly incorporating corrupted measurement data but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  8. Comparison of volatility function technique for risk-neutral densities estimation

    Science.gov (United States)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, which uses an interpolation approach, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth order polynomial, in extracting the RND. The implied volatility of options with respect to strike prices/delta is interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth order polynomial is more appropriate than using a smoothing spline, in that the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
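    The fourth-order-polynomial route can be sketched in three steps: fit a quartic to the implied-volatility smile across strikes, reprice calls with Black-Scholes on a dense strike grid, and apply the Breeden-Litzenberger relation f(K) = e^{rT} d2C/dK2. All market numbers below are invented for illustration:

```python
import numpy as np
from math import erf, exp, log, sqrt

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (standard normal CDF via erf)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

S0, r, T = 100.0, 0.01, 1 / 12                       # spot, rate, 1-month maturity
strikes = np.array([80, 90, 95, 100, 105, 110, 120.0])
ivs = np.array([0.28, 0.24, 0.22, 0.20, 0.21, 0.23, 0.27])  # assumed smile

# Fourth-order polynomial fit of the smile (centered strikes for conditioning)
coeffs = np.polyfit(strikes - S0, ivs, 4)

K_grid = np.linspace(85, 115, 301)
sigma_grid = np.polyval(coeffs, K_grid - S0)
prices = np.array([bs_call(S0, K, T, r, s) for K, s in zip(K_grid, sigma_grid)])

# Breeden-Litzenberger: RND = exp(rT) * d2C/dK2, via finite differences
dK = K_grid[1] - K_grid[0]
rnd = exp(r * T) * np.gradient(np.gradient(prices, dK), dK)
```

    The first moment of `rnd` is the quantity the paper compares against the realized underlying price at maturity.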

  9. A scintillation camera technique for quantitative estimation of separate kidney function and its use before nephrectomy

    International Nuclear Information System (INIS)

    Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.

    1975-01-01

    A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. The externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured in a lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. The measured postoperative GFR was compared with the preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
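    The predicted-GFR calculation described above is a one-line proportion: the remaining kidney's share of the depth-corrected Hippuran uptake times the measured preoperative GFR. A sketch with invented uptake counts:

```python
def predicted_postop_gfr(uptake_left, uptake_right, gfr_pre, keep="left"):
    """Predicted postoperative GFR = (remaining kidney's fraction of
    depth-corrected Hippuran uptake) * preoperative GFR."""
    total = uptake_left + uptake_right
    fraction = (uptake_left if keep == "left" else uptake_right) / total
    return fraction * gfr_pre

# e.g. left kidney holds 60% of depth-corrected uptake, preoperative GFR 90 ml/min
gfr_pred = predicted_postop_gfr(6000, 4000, 90.0, keep="left")  # -> 54.0 ml/min
```

    A measured postoperative GFR above this prediction is then attributed to compensatory hypertrophy of the remaining kidney.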

  10. Estimation of coronary wave intensity analysis using noninvasive techniques and its application to exercise physiology.

    Science.gov (United States)

    Broyd, Christopher J; Nijjer, Sukhjinder; Sen, Sayan; Petraco, Ricardo; Jones, Siana; Al-Lamee, Rasha; Foin, Nicolas; Al-Bustami, Mahmud; Sethi, Amarjit; Kaprielian, Raffi; Ramrakha, Punit; Khan, Masood; Malik, Iqbal S; Francis, Darrel P; Parker, Kim; Hughes, Alun D; Mikhail, Ghada W; Mayet, Jamil; Davies, Justin E

    2016-03-01

    Wave intensity analysis (WIA) has found particular applicability in the coronary circulation, where it can quantify traveling waves that accelerate and decelerate blood flow. The most important wave for the regulation of flow is the backward-traveling decompression wave (BDW). Coronary WIA has hitherto always been calculated from invasive measures of pressure and flow. However, recently it has become feasible to obtain estimates of these waveforms noninvasively. In this study we set out to assess the agreement between invasive and noninvasive coronary WIA at rest and to measure the effect of exercise. Twenty-two patients (mean age 60) with unobstructed coronaries underwent invasive WIA in the left anterior descending artery (LAD). Immediately afterwards, noninvasive LAD flow and pressure were recorded and WIA was calculated from pulsed-wave Doppler coronary flow velocity and central blood pressure waveforms measured using a cuff-based technique. Nine of these patients underwent noninvasive coronary WIA assessment during exercise. A pattern of six waves was observed with both modalities. The BDW was similar between invasive and noninvasive measures [peak: -14.9 ± 7.8 vs. -13.8 ± 7.1 × 10⁴ W·m⁻²·s⁻²; concordance correlation coefficient (CCC): 0.73]. Exercise increased the BDW: at maximum exercise, peak BDW was -47.0 ± 29.5 × 10⁴ W·m⁻²·s⁻².

  11. Comparison of Estimation Techniques for Vibro-Acoustic Transfer Path Analysis

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo França Padilha

    2006-01-01

    Full Text Available Vibro-acoustic Transfer Path Analysis (TPA) is a tool to evaluate the contribution of different energy propagation paths between a source and a receiver, linked to each other by a number of connections. TPA is typically used to quantify and rank the relative importance of these paths in a given frequency band, determining the most significant one to the receiver. Basically, two quantities have to be determined for TPA: the operational forces at each transfer path and the Frequency Response Functions (FRF) of these paths. The FRF are obtained either experimentally or analytically, and the influence of the mechanical impedance of the source can be taken into account or not. The operational forces can be directly obtained from measurements using force transducers or indirectly estimated from auxiliary response measurements. Two methods to obtain the operational forces indirectly - the Complex Stiffness Method (CSM) and the Matrix Inversion Method (MIM) - associated with two possible configurations to determine the FRF - including and excluding the source impedance - are presented and discussed in this paper. The effect of weak and strong coupling among the paths is also discussed in connection with the techniques presented. The main conclusion is that, with the source removed, CSM gives more accurate results. On the other hand, with the source present, MIM is preferable. In the latter case, CSM should be used only if there is a high impedance mismatch between the source and the receiver. Both methods are not affected by a higher or lower degree of coupling among the transfer paths.
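    At a single frequency line, the Matrix Inversion Method amounts to a pseudo-inverse: operational forces are recovered from auxiliary responses through the (over-determined) FRF matrix, after which each path contribution at the receiver is FRF times force. A toy complex-valued sketch (all FRFs invented):

```python
import numpy as np

# FRFs from the 2 path forces to 3 auxiliary response sensors (one frequency)
H = np.array([[1.0 + 0.2j, 0.10 + 0.0j],
              [0.05 + 0.1j, 0.80 - 0.1j],
              [0.30 - 0.2j, 0.40 + 0.3j]])   # over-determined: 3 sensors, 2 paths
F_true = np.array([2.0 + 1.0j, -0.5 + 0.3j]) # operational forces (unknown in practice)
X = H @ F_true                                # "measured" auxiliary responses

F_est = np.linalg.pinv(H) @ X                 # Matrix Inversion Method

H_target = np.array([0.6 + 0.1j, 0.2 - 0.4j])  # FRFs from each path to the receiver
contributions = H_target * F_est               # per-path contribution at receiver
total_at_receiver = contributions.sum()
```

    Ranking `abs(contributions)` identifies the dominant transfer path; in practice the inversion is regularized because H can be ill-conditioned when paths are strongly coupled.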

  12. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Therefore, monitoring and controlling the quality of air in the urban environment is very important for public health. However, air quality prediction is a highly complex and non-linear process, and usually several attributes have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem due to ozone levels in Jeddah city. The ozone concentration level was considered as a factor to predict the Air Quality (AQ) under the atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. Hence, an ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed using test data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone level and is a reliable approach to produce more genuine outcomes.

  13. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    International Nuclear Information System (INIS)

    Barut, Murat

    2010-01-01

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation problem posed by uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless high efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimates possible with a conventional single EKF algorithm. To this end, the estimation technique introduced in this work utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models, based on stator resistance and rotor resistance estimation respectively. This differs from previous approaches, which require two separate EKF algorithms operating in a switching or braided manner, and it therefore has an advantage over previous EKF schemes. The proposed EKF based estimation technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances. It is used in combination with the speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high and zero speed ranges, assuming that the measured stator phase currents and voltages are available. Even under these variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.
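    The building block here is the standard EKF predict/update cycle; the bi-input scheme feeds two model/input definitions consecutively through one such filter rather than running two filters in parallel. A generic sketch (the toy constant-velocity model below stands in for the paper's extended induction-motor models):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: nonlinear predict, linearized update."""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: [position, velocity] state with dt = 0.1, position measured
f = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1]])
F_jac = lambda x, u: np.array([[1.0, 0.1], [0.0, 1.0]])
h = lambda x: x[:1]
H_jac = lambda x: np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(50):
    z = np.array([0.1 * (k + 1)])            # noiseless ramp: unit true velocity
    x, P = ekf_step(x, P, None, z, f, F_jac, h, H_jac, Q, R)
```

    In the bi-input arrangement, consecutive calls would swap between a model augmented with the stator resistance and one augmented with the rotor resistance, sharing the same state and covariance.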

  14. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)

    2010-10-15

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation problem posed by uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless high efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimates possible with a conventional single EKF algorithm. To this end, the estimation technique introduced in this work utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models, based on stator resistance and rotor resistance estimation respectively. This differs from previous approaches, which require two separate EKF algorithms operating in a switching or braided manner, and it therefore has an advantage over previous EKF schemes. The proposed EKF based estimation technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances. It is used in combination with the speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high and zero speed ranges, assuming that the measured stator phase currents and voltages are available. Even under these variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.

  15. Uranium solution mining cost estimating technique: means for rapid comparative analysis of deposits

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    Twelve graphs provide a technique for determining relative cost ranges for uranium solution mining projects. The technique can provide a consistent framework for rapid comparative analysis of various properties and mining situations, and is also useful for determining the sensitivity of cost figures to incremental changes in mining factors or deposit characteristics.

  16. Techniques for the estimation of global irradiation from sunshine duration and global irradiation estimation for Italian locations

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-04-01

    The Angstrom equation H = H0(a + bS/S0) has been fitted using the least-squares method to the global irradiation and sunshine duration data of 31 Italian locations for the period 1965-1974. Three more linear equations have also each been fitted to the same data: i) the equation H' = H0(a + bS/S0), obtained by incorporating the effect of the multiple reflections between the earth's surface and the atmosphere; ii) the equation H = H0(a + bS/S'0), obtained by incorporating the effect of the sunshine recorder chart not burning when the elevation of the sun is less than 5 deg.; and iii) the equation H' = H0(a + bS/S'0), obtained by incorporating both of the above effects simultaneously. Good correlations, with correlation coefficients around 0.9 or more, are obtained for most of the locations with all four equations. Substantial spatial scatter is obtained in the values of the regression parameters. The use of any of the three latter equations does not result in any advantage over the simpler Angstrom equation; it neither decreases the spatial scatter in the values of the regression parameters nor yields better correlation. The computed values of the regression parameters in the Angstrom equation yield estimates of the global irradiation that are on average within ±4% of the measured values for most of the locations. (author)
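    Fitting the Angstrom equation is a single linear regression of the clearness index H/H0 on the sunshine fraction S/S0, with intercept a and slope b. A sketch with invented monthly-mean values (typical fits give a near 0.2-0.3 and b near 0.5):

```python
import numpy as np

# Invented monthly means: sunshine fraction S/S0 and clearness index H/H0
s_frac = np.array([0.35, 0.42, 0.51, 0.60, 0.68, 0.74])
clearness = np.array([0.40, 0.44, 0.48, 0.53, 0.57, 0.60])

b, a = np.polyfit(s_frac, clearness, 1)   # least-squares slope b, intercept a

def estimate_H(H0, S, S0):
    """Angstrom estimate of global irradiation from sunshine duration."""
    return H0 * (a + b * S / S0)

# e.g. extraterrestrial irradiation 100 units, half the possible sunshine hours
H_est = estimate_H(100.0, 0.5, 1.0)
```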

  17. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    Science.gov (United States)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method for multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed specifically for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it accurately estimates the DOAs from short data snapshots, and even from a single snapshot, outperforming the state-of-the-art techniques by far in both DOA estimation accuracy and computational cost.
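    The single-snapshot case illustrates the idea: a ULA response to K sources is a sum of K complex exponentials across the elements, so a length-(K+1) filter that annihilates the snapshot has polynomial roots at exp(j*pi*sin(theta_k)) for half-wavelength spacing. A noiseless sketch (the paper's full method additionally handles noise and multiple snapshots):

```python
import numpy as np

def af_doa(x, K):
    """x: one complex snapshot across N ULA elements; K: number of sources."""
    N = len(x)
    # Rows of K+1 consecutive samples; each row is annihilated by the filter h
    A = np.array([x[i:i + K + 1] for i in range(N - K)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                     # null-space vector = filter coefficients
    roots = np.roots(h[::-1])             # roots of h0 + h1*z + ... + hK*z^K
    # For d = lambda/2 spacing, z_k = exp(j*pi*sin(theta_k))
    return np.sort(np.degrees(np.arcsin(np.angle(roots) / np.pi)))

# Two sources at -20 and +35 degrees, 8-element half-wavelength ULA, one snapshot
N = 8
thetas = np.radians([-20.0, 35.0])
n = np.arange(N)
x = sum(np.exp(1j * np.pi * np.sin(t) * n) for t in thetas)
est = af_doa(x, K=2)
```

    With exact data the Hankel matrix has rank K, its null space is one-dimensional, and the recovered angles are exact to machine precision.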

  18. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, M.; Manos, G. C.

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to ...... of freedom system loaded by white noise, estimating the coefficient of restitution as explained, and comparing the estimates with the value used in the simulations. Several estimates for the coefficient of restitution are considered, and reasonable results are achieved....

  19. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  20. Estimation of the impact of manufacturing tolerances on burn-up calculations using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bock, M.; Wagner, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Garching (Germany). Forschungszentrum

    2012-11-01

    In recent years, the availability of computing resources has increased enormously. There are two ways to take advantage of this increase in analyses in the field of the nuclear fuel cycle, such as burn-up calculations or criticality safety calculations. The first is to improve the accuracy of the models that are analyzed: for burn-up calculations this means that modeling and calculating the burn-up of a full reactor core is coming more and more within reach. The second is to run state-of-the-art programs with simplified models several times, but with varied input parameters. This second way opens up the assessment of uncertainties and sensitivities based on the Monte Carlo method for fields of research that rely heavily on either high CPU usage or high memory consumption. In the context of the nuclear fuel cycle, applications that involve these demanding analyses are again burn-up and criticality safety calculations. The assessment of uncertainties in burn-up analyses can complement traditional analysis techniques such as best estimate or bounding case analyses and can support the safety analysis in future design decisions, e.g. by analyzing the uncertainty of the decay heat power of the nuclear inventory stored in the spent fuel pool of a nuclear power plant. This contribution concentrates on the uncertainty analysis in burn-up calculations of PWR fuel assemblies. The uncertainties in the results arise from the variation of the input parameters. Here, the focus is on the one hand on the variation of manufacturing tolerances that are present in the different production stages of the fuel assemblies, and on the other hand on uncertainties that describe the conditions during reactor operation, which also affect the results of burn-up calculations.
In order to perform uncertainty analyses in burn-up calculations, GRS has improved the capabilities of its general
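    The Monte Carlo propagation described here can be shown in miniature: sample the uncertain inputs from their tolerance bands, run the model once per sample, and read the uncertainty off the output distribution. The tolerance values and the smooth stand-in model below are invented; a real analysis would drive a burn-up code in place of `toy_decay_heat`.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs = 500

# Assumed tolerances: enrichment 4.0 +/- 0.05 wt%, pellet density 10.4 +/- 0.1 g/cm3
enrichment = rng.uniform(3.95, 4.05, n_runs)
density = rng.uniform(10.3, 10.5, n_runs)

def toy_decay_heat(e, rho):
    """Placeholder stand-in for a burn-up calculation (arbitrary smooth model, W)."""
    return 1.0e3 * (1 + 0.08 * (e - 4.0) + 0.05 * (rho - 10.4))

heat = toy_decay_heat(enrichment, density)          # one "run" per sampled input set
mean, std = heat.mean(), heat.std(ddof=1)
p95 = np.percentile(heat, 95)                        # e.g. input to a bounding value
```

    The output percentiles (or tolerance-limit constructions based on the number of runs) then complement a traditional bounding-case analysis.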

  1. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Full Text Available Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., Normalized Difference Vegetation Index, Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train the regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using leave-one-out cross validation with these techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high
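    The leave-one-out cross-validation used to score each technique is the same regardless of the regressor: hold out one sample, fit on the rest, predict the held-out sample, and accumulate the errors. A sketch with a plain least-squares model and synthetic VI/AGB data (the real study used 54 VIs and eight regressors):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
vi = rng.uniform(0.2, 0.9, size=(n, 3))                      # 3 toy vegetation indexes
agb = 2.0 + 8.0 * vi[:, 0] + 3.0 * vi[:, 1] + rng.normal(0, 0.3, n)  # AGB, t/ha

preds = np.empty(n)
for i in range(n):                                           # leave-one-out loop
    train = np.delete(np.arange(n), i)
    X = np.column_stack([np.ones(n - 1), vi[train]])         # intercept + VIs
    beta, *_ = np.linalg.lstsq(X, agb[train], rcond=None)
    preds[i] = np.concatenate(([1.0], vi[i])) @ beta

rmse = np.sqrt(np.mean((preds - agb) ** 2))
r2 = 1 - np.sum((preds - agb) ** 2) / np.sum((agb - agb.mean()) ** 2)
```

    Repeating this loop with each of the eight regressors yields the comparable R2/RMSE figures reported above.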

  2. Porosity and hydraulic conductivity estimation of the basaltic aquifer in Southern Syria by using nuclear and electrical well logging techniques

    Science.gov (United States)

    Asfahani, Jamal

    2017-08-01

    An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. The method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained with this technique appears reasonable and is comparable with the hydraulic conductivity value of 3.09 m/day obtained from the pumping test carried out at the Kodana well. The proposed alternative well logging methodology therefore seems promising and could be applied in basaltic environments for the estimation of the hydraulic conductivity parameter. However, more detailed research is still required to make the proposed technique fully reliable in basaltic environments.
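    One generic way to combine the two log types, not necessarily the paper's exact formulation, is to derive porosity from resistivity via Archie's law and then convert porosity to hydraulic conductivity with a Kozeny-Carman-style relation. All constants below (Archie's a and m, the effective grain diameter, the resistivity values) are textbook or invented values used only for illustration:

```python
def porosity_archie(R0, Rw, a=1.0, m=2.0):
    """Archie: F = R0/Rw = a * phi**(-m)  =>  phi = (a*Rw/R0)**(1/m)."""
    return (a * Rw / R0) ** (1.0 / m)

def kozeny_carman_K(phi, d_mm=0.1):
    """Hydraulic conductivity (m/day) from porosity and an assumed grain size."""
    d = d_mm / 1000.0                                  # grain diameter, m
    k = (d**2 / 180.0) * phi**3 / (1 - phi)**2         # intrinsic permeability, m^2
    rho_g_over_mu = 1000.0 * 9.81 / 1.0e-3             # water at ~20 C, 1/(m*s)
    return k * rho_g_over_mu * 86400.0                 # m/s -> m/day

phi = porosity_archie(R0=40.0, Rw=2.5)                 # invented ohm-m log readings
K = kozeny_carman_K(phi)                               # ~1.3 m/day for these inputs
```

    With these invented inputs the estimate lands at the same order of magnitude as the 3.09 m/day pumping-test value quoted above, which is the kind of consistency check the paper performs.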

  3. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    Science.gov (United States)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions, may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. The global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
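
    The KNN-with-uncertainty idea in this record can be sketched as below. This is a toy illustration, not the authors' code: the predictor values, k, and the use of neighbour spread as the uncertainty are assumptions. "Inexperience" is taken, as the record describes, to be the distance in predictor space to the single nearest training sample.

    ```python
    import numpy as np

    def knn_predict(X_train, y_train, X_query, k=3):
        """k-nearest-neighbour interpolation with a spread-based uncertainty
        and an 'inexperience' score (distance to the single nearest sample)."""
        preds, uncerts, inexp = [], [], []
        for q in X_query:
            d = np.linalg.norm(X_train - q, axis=1)
            idx = np.argsort(d)[:k]
            preds.append(y_train[idx].mean())    # estimate: mean of k neighbours
            uncerts.append(y_train[idx].std())   # uncertainty: neighbour spread
            inexp.append(d.min())                # inexperience: nearest distance
        return np.array(preds), np.array(uncerts), np.array(inexp)

    # Predictors might be bathymetry and distance from coast (toy values here).
    X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y_train = np.array([1.0, 2.0, 2.0, 3.0])     # e.g. TOC, wt%
    pred, unc, inx = knn_predict(X_train, y_train, np.array([[0.5, 0.5]]), k=4)
    print(pred[0])   # 2.0: mean of all four neighbours
    ```

    Large inexperience values flag query locations far from any observation, which is exactly where the record suggests new data collection would help most.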

  4. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Science.gov (United States)

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...

  5. Estimating forest attribute parameters for small areas using nearest neighbors techniques

    Science.gov (United States)

    Ronald E. McRoberts

    2012-01-01

    Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...

  6. Application of optimal estimation techniques to FFTF decay heat removal analysis

    International Nuclear Information System (INIS)

    Nutt, W.T.; Additon, S.L.; Parziale, E.A.

    1979-01-01

    The verification and adjustment of plant models for decay heat removal analysis using a mix of engineering judgment and formal techniques from control theory are discussed. The formal techniques facilitate dealing with typical test data which are noisy, redundant and do not measure all of the plant model state variables directly. Two pretest examples are presented. 5 refs

  7. Comparison of three techniques for estimating the forage intake of lactating dairy cows on pasture.

    Science.gov (United States)

    Macoon, B; Sollenberger, L E; Moore, J E; Staples, C R; Fike, J H; Portier, K M

    2003-09-01

    Quantifying DMI is necessary for estimation of nutrient consumption by ruminants, but it is inherently difficult on grazed pastures and even more so when supplements are fed. Our objectives were to compare three methods of estimating forage DMI (inference from animal performance, evaluation from fecal output using a pulse-dose marker, and estimation from herbage disappearance methods) and to identify the most useful approach or combination of approaches for estimating pasture intake by lactating dairy cows. During three continuous 28-d periods in the winter season, Holstein cows (Bos taurus; n = 32) grazed a cool-season grass or a cool-season grass-clover mixture at two stocking rates (SR; 5 vs. 2.5 cows/ha) and were fed two rates of concentrate supplementation (CS; 1 kg of concentrate [as-fed] per 2.5 or 3.5 kg of milk produced). Animal response data used in computations for the animal performance method were obtained from the latter 14 d of each period. For the pulse-dose marker method, chromium-mordanted fiber was used. Pasture sampling to determine herbage disappearance was done weekly throughout the study. Forage DMI estimated by the animal performance method was different among periods (P < 0.05) and varied with forage mass. The pulse-dose marker method generally provided greater estimates of forage DMI (as much as 11.0 kg/d more than the animal performance method) and was not correlated with the other methods. Estimates of forage DMI by the herbage disappearance method were correlated with the animal performance method. The difference between estimates from these two methods, ranging from -4.7 to 5.4 kg/d, was much lower than their difference from pulse-dose marker estimates. The results of this study suggest that, when appropriate for the research objectives, the animal performance or herbage disappearance methods may be useful and less costly alternatives to using the pulse-dose method.

  8. Estimation of low level gross alpha activities in the radioactive effluent using liquid scintillation counting technique

    International Nuclear Information System (INIS)

    Bhade, Sonali P.D.; Johnson, Bella E.; Singh, Sanjay; Babu, D.A.R.

    2012-01-01

    A technique has been developed for simultaneous measurement of gross alpha and gross beta activity concentrations in low-level liquid effluent samples in the presence of higher activity concentrations of tritium. For this purpose, an alpha-beta discriminating Pulse Shape Analysis (PSA) Liquid Scintillation Counting (LSC) technique was used. The main advantages of this technique are easy sample preparation, rapid measurement and higher sensitivity. The calibration methodology for the Quantulus 1220 LSC based on the PSA technique, using 241Am and 90Sr/90Y as alpha and beta standards respectively, is described in detail. The LSC technique was validated by measuring alpha and beta activity concentrations in test samples with known amounts of 241Am and 90Sr/90Y activities spiked in distilled water. The results obtained by the LSC technique were compared with conventional planchet counting methods using ZnS(Ag) and end-window GM detectors. The gross alpha and gross beta activity concentrations in spiked samples obtained by the LSC technique were found to be within ±5% of the reference values. (author)

  9. Using fuzzy logic to improve the project time and cost estimation based on Project Evaluation and Review Technique (PERT)

    Directory of Open Access Journals (Sweden)

    Farhad Habibi

    2018-09-01

    Full Text Available Among different factors, correct scheduling is one of the vital elements for project management success. There are several ways to schedule projects, including the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT). Due to problems in estimating durations of activities, these methods cannot accurately and completely model actual projects. The use of fuzzy theory is a basic way to improve scheduling and deal with such problems. Fuzzy theory approximates project scheduling models to reality by taking into account uncertainties in decision parameters as well as expert experience and mental models. This paper provides a step-by-step approach for accurate estimation of the time and cost of projects using the Project Evaluation and Review Technique (PERT) and expert views expressed as fuzzy numbers. The proposed method includes several steps. In the first step, the necessary information for the project time and cost is estimated using the Critical Path Method (CPM) and the Project Evaluation and Review Technique (PERT). The second step considers the durations and costs of the project activities as trapezoidal fuzzy numbers, and then the time and cost of the project are recalculated. The durations and costs of activities are estimated using questionnaires as well as weighting of the expert opinions, averaging, and defuzzification based on a step-by-step algorithm. The calculation procedures are applied to a real project, and the obtained results are explained.
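
    The trapezoidal-fuzzy-number step can be illustrated as follows. This is a minimal sketch of centroid defuzzification and component-wise fuzzy addition along one path; the activity durations are invented, not taken from the paper's case project.

    ```python
    def trapezoid_centroid(a, b, c, d):
        """Centroid defuzzification of a trapezoidal fuzzy number (a<=b<=c<=d):
        integrate x*mu(x) over the two ramps and the flat core, divide by area."""
        num = (b - a) * (a + 2 * b) / 6 + (c ** 2 - b ** 2) / 2 + (d - c) * (2 * c + d) / 6
        den = ((d - a) + (c - b)) / 2
        return num / den

    def fuzzy_sum(durations):
        """Fuzzy addition of trapezoidal numbers is component-wise."""
        return tuple(sum(x) for x in zip(*durations))

    # Hypothetical activity durations (days) along one path, as (a, b, c, d).
    path = [(2, 3, 4, 6), (1, 2, 2, 3), (4, 5, 6, 8)]
    total = fuzzy_sum(path)                      # (7, 10, 12, 17)
    print(total, round(trapezoid_centroid(*total), 2))
    ```

    The defuzzified total (about 11.6 days here) plays the role that the crisp expected duration plays in classical PERT, while the spread of the trapezoid carries the experts' uncertainty through the calculation.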

  10. New horizontal global solar radiation estimation models for Turkey based on robust coplot supported genetic programming technique

    International Nuclear Information System (INIS)

    Demirhan, Haydar; Kayhan Atilgan, Yasemin

    2015-01-01

    Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention from researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by the robust coplot technique.

  11. Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania.

    Science.gov (United States)

    Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James

    2009-09-01

    Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
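
    The regression approach to missing-landmark estimation can be sketched as follows. This is a schematic with made-up flattened "landmark" coordinates, not the authors' geometric-morphometric pipeline: one missing coordinate is regressed on the observed ones over a reference sample, then the fitted model is applied to the damaged specimen.

    ```python
    import numpy as np

    def regression_impute(ref, target_obs, missing_col):
        """Estimate one missing landmark coordinate by ordinary least squares:
        fit the missing coordinate against the observed coordinates across the
        reference sample, then predict for the damaged specimen."""
        obs_cols = [j for j in range(ref.shape[1]) if j != missing_col]
        X = np.column_stack([np.ones(len(ref)), ref[:, obs_cols]])
        coef, *_ = np.linalg.lstsq(X, ref[:, missing_col], rcond=None)
        x_new = np.concatenate([[1.0], target_obs[obs_cols]])
        return x_new @ coef

    rng = np.random.default_rng(1)
    # Reference sample: 50 specimens x 4 landmark coordinates (flattened).
    ref = rng.normal(size=(50, 4))
    ref[:, 3] = 0.8 * ref[:, 0] - 0.5 * ref[:, 2] + rng.normal(scale=0.05, size=50)
    damaged = np.array([0.4, -1.0, 0.2, np.nan])     # coordinate 3 is missing
    est = regression_impute(ref, damaged, missing_col=3)
    print(round(est, 2))   # close to 0.8*0.4 - 0.5*0.2 = 0.22
    ```

    The regression leverages the covariation structure of the whole reference sample, which is exactly why the record finds such methods more accurate than single-individual substitution when a large extant sample is available.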

  12. Tissue printing technique on nitrocellulose membranes: a rapid detection technique for estimating incidence of PVX, PVY, PVS and PLRV viruses infecting potato (Solanum spp.)

    Directory of Open Access Journals (Sweden)

    Mónica Guzmán

    2002-07-01

    Full Text Available The ELISA serological technique has been used since the 1970s as a quantitative technique for the detection of many groups of viruses which infect plants. The immune-impression (IMI) on nitrocellulose membrane qualitative technique has been implemented more recently for the detection of different viral groups. In this work, the IMI technique has been adapted for the detection of the PVX, PVY, PVS and PLRV viruses which attack different species and varieties of the potato crop (Solanum spp., such as Egg yolk, Capiro, Morita, Pastusa, Monserrate, Tuquerreña, ICA Puracé and ICA Nariño, all from the Nariño department). The four viruses mentioned above can cause losses of between 30% and 60% in production, whether acting alone or synergistically. This means that they can easily reduce the economic benefits of a country like Colombia, characterised as being a great potato producer (i.e. more than 2.8 million tons per year). The IMI technique was compared with the ELISA technique (Enzyme-Linked Immunosorbent Assay) using the same samples, leading to confirmation of the test's sensitivity for detecting the viruses. From a total of 800 samples analyzed by IMI from different areas in the Nariño department, incidences of 72% for PVY, 38.7% for PVX, 85.6% for PVS and 91.1% for PLRV were found; these estimates were similar to or greater than those obtained using ELISA. These results are new for Colombia in terms of implementing the easy and sensitive IMI technique for detecting these four viral groups infecting the potato, as well as estimating their incidence in Nariño, one of Colombia's most important potato-producing departments. The opportune and flexible detection of viruses leads to an effective response in eradicating contaminated material, both material in the field and that from in vitro culture. The results suggest that implementing IMI could bring wide benefits for potato seed certification programmes, as they maintain sensitivity and specificity, they

  13. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    Science.gov (United States)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
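
    A bootstrap particle filter of the kind used as the estimator in this framework can be sketched as follows. This is a generic scalar example: the linear growth dynamics, the Gaussian observation model standing in for a SAR-derived parameter, and all the noise levels are placeholders, not the paper's calibrated models.

    ```python
    import numpy as np

    def particle_filter_step(particles, z, rng, q=0.05, r=0.2):
        """One predict/update/resample cycle of a bootstrap particle filter.
        State: continuous growth-stage indicator; z: noisy observation."""
        # Predict: propagate each particle through the growth model (+ noise).
        particles = particles + 0.1 + rng.normal(0, q, size=particles.size)
        # Update: weight each particle by the observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        # Resample: draw particles proportional to weight (multinomial).
        idx = rng.choice(particles.size, size=particles.size, p=w)
        return particles[idx]

    rng = np.random.default_rng(0)
    particles = rng.normal(2.0, 0.5, size=2000)      # initial belief about the stage
    truth = 2.0
    for _ in range(20):                              # 20 observation epochs
        truth += 0.1                                 # crop advances each epoch
        z = truth + rng.normal(0, 0.2)               # noisy SAR-like observation
        particles = particle_filter_step(particles, z, rng)
    print(round(particles.mean(), 1))                # posterior mean tracks the truth
    ```

    The posterior mean of the particle cloud serves as the real-time growth-stage estimate, and the cloud's spread gives its uncertainty; multi-sensor fusion amounts to applying the update step once per available observation.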

  14. A virtually blind spectrum-efficient channel estimation technique for MIMO-OFDM systems

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output antennas in conjunction with Orthogonal Frequency-Division Multiplexing is a dominant air interface for 4G and 5G cellular communication systems. Additionally, MIMO-OFDM based air interface is the foundation for latest wireless Local Area Networks, wireless Personal Area Networks, and digital multimedia broadcasting. Whether it is a single antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of channel bandwidth. This paper describes a virtually blind spectrum efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)
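
    The bandwidth cost of the training-based estimation this record contrasts itself with is easy to see in a conventional least-squares pilot estimator. The sketch below is a textbook single-antenna approach, not the paper's method; flat per-subcarrier fading and the pilot spacing are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sc = 64                                   # OFDM subcarriers
    h = rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)   # true channel gains

    # Pilot-aided LS: estimate H on pilot tones only, interpolate elsewhere.
    pilot_idx = np.arange(0, n_sc, 8)           # every 8th subcarrier carries a pilot
    pilots = np.ones(pilot_idx.size)            # known pilot symbols (BPSK '1')
    noise = 0.01 * (rng.normal(size=pilot_idx.size)
                    + 1j * rng.normal(size=pilot_idx.size))
    rx = h[pilot_idx] * pilots + noise          # received pilot tones
    h_ls = rx / pilots                          # least-squares estimate at pilots

    # Linear interpolation of real/imag parts onto the data subcarriers.
    h_hat = (np.interp(np.arange(n_sc), pilot_idx, h_ls.real)
             + 1j * np.interp(np.arange(n_sc), pilot_idx, h_ls.imag))
    overhead = pilot_idx.size / n_sc
    print(f"pilot overhead: {overhead:.1%}")    # subcarriers spent on training
    ```

    Here one in eight subcarriers (12.5%) carries no data at all, which is precisely the spectral cost a blind or virtually blind scheme tries to avoid.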

  15. River suspended sediment estimation by climatic variables implication: Comparative study among soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal

    2012-06-01

    Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers by using hydro-meteorological data. The daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to be better than the others.

  16. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral...... resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can...... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60°. All 280 spectrograms were then randomised and presented to a radiologist blinded for method and OW for visual evaluation: useful or not useful. WMbw
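
    Welch's method referenced above, which averages windowed periodograms over the observation window, can be sketched in a few lines. This is a generic illustration, not the scanner's implementation; the segment length, overlap and test tone are arbitrary choices.

    ```python
    import numpy as np

    def welch_psd(x, seg_len=32, overlap=16):
        """Welch's method: split the signal into overlapping segments, window
        each, and average the squared FFT magnitudes. Averaging more segments
        smooths the spectrum, but requires a longer observation window."""
        win = np.hanning(seg_len)
        step = seg_len - overlap
        segs = [x[i:i + seg_len] * win
                for i in range(0, len(x) - seg_len + 1, step)]
        psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
        return psd / np.sum(win ** 2)

    fs = 1000.0
    t = np.arange(1024) / fs
    x = np.sin(2 * np.pi * 125 * t)             # 125 Hz tone (a 'velocity' line)
    psd = welch_psd(x)
    peak_bin = int(np.argmax(psd))
    print(peak_bin * fs / 32)                   # prints 125.0
    ```

    The variance-resolution trade-off is visible in the parameters: shorter segments mean more averages (lower variance) but coarser frequency bins, which is exactly why WM needs a long observation window and why the adaptive BPC/BAPES estimators are attractive.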

  17. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  18. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    International Nuclear Information System (INIS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was ±5% with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  19. Estimating primary productivity of tropical oil palm in Malaysia using remote sensing technique and ancillary data

    Science.gov (United States)

    Kanniah, K. D.; Tan, K. P.; Cracknell, A. P.

    2014-10-01

    The amount of carbon sequestration by vegetation can be estimated using vegetation productivity. At present, there is a knowledge gap in oil palm net primary productivity (NPP) at a regional scale. Therefore, in this study the NPP of oil palm trees in Peninsular Malaysia was estimated using a remote sensing based light use efficiency (LUE) model with inputs from local meteorological data, upscaled leaf area index/fractional photosynthetically active radiation (LAI/fPAR) derived using UK-DMC 2 satellite data, and a constant maximum LUE value from the literature. NPP values estimated from the model were then compared and validated with NPP estimated using allometric equations developed by Corley and Tinker (2003), Henson (2003) and Syahrinudin (2005) with diameter at breast height, age and the height of the oil palm trees collected from three estates in Peninsular Malaysia. Results of this study show that oil palm NPP derived using a light use efficiency model increases with respect to the age of oil palm trees, and it stabilises after ten years. The mean value of oil palm NPP at 118 plots as derived using the LUE model is 968.72 g C m-2 year-1, and this is 188%-273% higher than the NPP derived from the allometric equations. The estimated oil palm NPP of young oil palm trees is lower than that of mature oil palm trees, because young oil palm trees contribute lower oil palm LAI and therefore fPAR, which is an important variable in the LUE model. In contrast, it is noted that oil palm NPP decreases with respect to the age of oil palm trees as estimated using the allometric equations. It was found in this study that LUE models could not capture NPP variation of oil palm trees if LAI/fPAR is used. On the other hand, tree height and DBH are found to be important variables that can capture changes in oil palm NPP as a function of age.
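
    The LUE model underlying this record is Monteith-type: productivity is incident PAR scaled by the absorbed fraction (fPAR, here derived from LAI via Beer's law) and a maximum light-use efficiency. A numeric sketch of why low LAI depresses modelled NPP follows; the constants are illustrative assumptions, not the paper's calibration.

    ```python
    import math

    def npp_lue(par, lai, eps_max=0.9, k=0.5):
        """Monteith-type light-use-efficiency model.
        par:     incident PAR over the period (MJ m-2)        -- assumed value
        lai:     leaf area index (m2 m-2)
        eps_max: maximum light-use efficiency (g C / MJ PAR)  -- assumed value
        k:       canopy extinction coefficient (Beer's law)   -- assumed value"""
        fpar = 1.0 - math.exp(-k * lai)     # fraction of PAR absorbed by canopy
        return eps_max * fpar * par         # NPP, g C m-2 per period

    # Young vs mature canopy: lower LAI -> lower fPAR -> lower modelled NPP.
    print(round(npp_lue(par=2500, lai=1.5), 1))   # young stand
    print(round(npp_lue(par=2500, lai=5.0), 1))   # mature stand
    ```

    Because fPAR saturates as LAI grows, the modelled NPP also flattens out for mature canopies, matching the record's observation that LUE-modelled NPP stabilises after about ten years.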

  20. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
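
    For a simple nonlinear statistic such as a ratio R = ȳ/x̄, the Taylor step described above gives the familiar linearized variance estimator. The sketch below is a minimal simple-random-sampling illustration, far simpler than the Laeken indicators; the synthetic data are invented.

    ```python
    import numpy as np

    def ratio_var_linearized(y, x, N=None):
        """Taylor-linearized variance of R = ybar/xbar under simple random
        sampling: linearize R via z_i = (y_i - R*x_i)/xbar, then apply the
        standard SRS variance of a mean to the linearized values z."""
        n = len(y)
        R = y.mean() / x.mean()
        z = (y - R * x) / x.mean()          # linearized (influence) values
        fpc = 1.0 - n / N if N else 1.0     # finite population correction
        return fpc * z.var(ddof=1) / n

    rng = np.random.default_rng(3)
    x = rng.uniform(1, 2, size=500)
    y = 0.6 * x + rng.normal(scale=0.05, size=500)
    v = ratio_var_linearized(y, x)
    print(f"R ~ {y.mean() / x.mean():.3f}, SE ~ {np.sqrt(v):.4f}")
    ```

    The generalized method the paper uses replaces z_i with the influence-function values of the (more complicated) Laeken indicators, but the final variance formula applied to the linearized values is the same.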

  1. A MATLAB program for estimation of unsaturated hydraulic soil parameters using an infiltrometer technique

    DEFF Research Database (Denmark)

    Mollerup, Mikkel; Hansen, Søren; Petersen, Carsten

    2008-01-01

    We combined an inverse routine for assessing the hydraulic soil parameters of the Campbell/Mualem model with the power series solution developed by Philip for describing one-dimensional vertical infiltration into a homogenous soil. We based the estimation routine on a proposed measurement procedure....... An independent measurement of the soil water content at saturation may reduce the uncertainty of estimated parameters. Response surfaces of the objective function were analysed. Scenarios for various soils and conditions, using numerically generated synthetic cumulative infiltration data with normally...

  2. THE IMPROVEMENT OF ESTIMATION TECHNIQUE FOR EFFECTIVENESS OF INVESTMENT PROJECTS ON WASTE UTILIZATION

    Directory of Open Access Journals (Sweden)

    V.V. Krivorotov

    2008-06-01

    Full Text Available The main tendencies of waste formation and recycling in the Russian Federation and in the Sverdlovsk region have been analyzed, and the principal factors restraining the inclusion of anthropogenic formations in the economic circulation have been revealed in this work. A technical approach to estimating the integrated ecological and economic efficiency of recycling projects is proposed which, in the authors' opinion, secures higher objectivity of this estimation as well as the validity of the decisions made on their realization.

  3. Estimating the burden of pneumococcal pneumonia among adults: a systematic review and meta-analysis of diagnostic techniques.

    Directory of Open Access Journals (Sweden)

    Maria A Said

    Full Text Available Pneumococcal pneumonia causes significant morbidity and mortality among adults. Given the limitations of diagnostic tests for non-bacteremic pneumococcal pneumonia, most studies report the incidence of bacteremic or invasive pneumococcal disease (IPD) and thus grossly underestimate the pneumococcal pneumonia burden. We aimed to develop a conceptual and quantitative strategy to estimate the non-bacteremic disease burden among adults with community-acquired pneumonia (CAP) using systematic study methods and the availability of a urine antigen assay. We performed a systematic literature review of studies providing information on the relative yield of various diagnostic assays (BinaxNOW® S. pneumoniae urine antigen test (UAT), blood culture and/or sputum culture) in diagnosing pneumococcal pneumonia. We estimated the proportion of pneumococcal pneumonia that is bacteremic, the proportion of CAP attributable to pneumococcus, and the additional contribution of the Binax UAT beyond conventional diagnostic techniques, using random-effects meta-analytic methods and bootstrapping. We included 35 studies in the analysis, predominantly from developed countries. The estimated proportion of pneumococcal pneumonia that is bacteremic was 24.8% (95% CI: 21.3%, 28.9%). The estimated proportion of CAP attributable to pneumococcus was 27.3% (95% CI: 23.9%, 31.1%). The Binax UAT diagnosed an additional 11.4% (95% CI: 9.6%, 13.6%) of CAP beyond conventional techniques. We were limited by the fact that not all patients underwent all diagnostic tests and by the sensitivity and specificity of the diagnostic tests themselves. We address these resulting biases and provide a range of plausible values in order to estimate the burden of pneumococcal pneumonia among adults. Estimating the adult burden of pneumococcal disease from bacteremic pneumococcal pneumonia data alone significantly underestimates the true burden of disease in adults.
For every case of bacteremic pneumococcal pneumonia
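
    The headline fractions imply a simple scaling: if only 24.8% of pneumococcal pneumonia is bacteremic, counts based on bacteremic/IPD surveillance understate total pneumococcal pneumonia by roughly a factor of four. A sketch of that arithmetic, using the record's point estimate and CI bounds (the 1000-case count is a hypothetical input):

    ```python
    def scale_burden(bacteremic_cases, frac_bacteremic):
        """Scale observed bacteremic cases to total pneumococcal pneumonia."""
        return bacteremic_cases / frac_bacteremic

    # Point estimate and 95% CI bounds for the bacteremic fraction (from the record).
    for frac in (0.213, 0.248, 0.289):
        print(round(scale_burden(1000, frac)))   # total cases per 1000 bacteremic
    ```

    With the point estimate, 1000 observed bacteremic cases imply roughly 4000 total pneumococcal pneumonia cases; the CI bounds bracket that multiplier between about 3.5 and 4.7.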

  4. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    Science.gov (United States)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset which contains 9 kernels extracted from real photographs by Adobe, where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
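
    The ART starting point is the Kaczmarz row-action iteration; the authors add regularization and a conjugate-gradient solver on top. A minimal unregularized ART sketch, with a tiny made-up system standing in for the projection geometry:

    ```python
    import numpy as np

    def art(A, b, iters=200, relax=1.0):
        """Algebraic reconstruction technique (Kaczmarz): cycle through the
        rows of A, projecting the current estimate onto each row's hyperplane.
        'relax' is the usual relaxation parameter (1.0 = full projection)."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            for a_i, b_i in zip(A, b):
                x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
        return x

    # Tiny consistent system: 3 "rays", 2 unknowns (stand-in for PSF pixels).
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    x_true = np.array([2.0, -1.0])
    b = A @ x_true                       # simulated projection data
    x = art(A, b)
    print(np.round(x, 6))                # converges to [ 2. -1.]
    ```

    For a consistent system the iteration converges to the exact solution; with noisy projections it oscillates near the least-squares solution, which is where the paper's regularization term comes in.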

  5. Estimation of fracture parameters in foam core materials using thermal techniques

    DEFF Research Database (Denmark)

    Dulieu-Barton, J. M.; Berggreen, Christian; Boyenval Langlois, C.

    2010-01-01

    The paper presents some initial work on establishing the stress state at a crack tip in PVC foam material using a non-contact infra-red technique known as thermoelastic stress analysis (TSA). A parametric study of the factors that may affect the thermoelastic response of the foam material is described. A mode I simulated crack in the form of a machined notch is used to establish the feasibility of the TSA approach to derive stress intensity factors for the foam material. The overall goal is to demonstrate that thermal techniques have the ability to provide deeper insight into the behaviour...

  6. Using Quantitative Data Analysis Techniques for Bankruptcy Risk Estimation for Corporations

    Directory of Open Access Journals (Sweden)

    Ştefan Daniel ARMEANU

    2012-01-01

    Full Text Available Diversification of methods and techniques for quantification and management of risk has led to the development of many mathematical models, a large part of which focused on measuring bankruptcy risk for businesses. In financial analysis there are many indicators which can be used to assess the risk of bankruptcy of enterprises but to make an assessment it is needed to reduce the number of indicators and this can be achieved through principal component, cluster and discriminant analyses techniques. In this context, the article aims to build a scoring function used to identify bankrupt companies, using a sample of companies listed on Bucharest Stock Exchange.
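
    As a concrete illustration of the discriminant-analysis step used to build such a scoring function, here is a two-feature Fisher linear discriminant separating two groups of firms. The data and the two ratios are invented; this is not the authors' fitted scoring model:

    ```python
    def fisher_lda_2d(class0, class1):
        """Fisher linear discriminant for two features.

        Returns (w, threshold): score(x) = w[0]*x[0] + w[1]*x[1], with
        w proportional to Sw^{-1} (m1 - m0) and the threshold at the midpoint
        of the projected class means.
        """
        def mean(pts):
            return [sum(p[i] for p in pts) / len(pts) for i in range(2)]

        def scatter(pts, m):
            s = [[0.0, 0.0], [0.0, 0.0]]
            for p in pts:
                d = (p[0] - m[0], p[1] - m[1])
                for i in range(2):
                    for j in range(2):
                        s[i][j] += d[i] * d[j]
            return s

        m0, m1 = mean(class0), mean(class1)
        s0, s1 = scatter(class0, m0), scatter(class1, m1)
        sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
        det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
        inv = [[sw[1][1] / det, -sw[0][1] / det],
               [-sw[1][0] / det, sw[0][0] / det]]
        dm = (m1[0] - m0[0], m1[1] - m0[1])
        w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
             inv[1][0] * dm[0] + inv[1][1] * dm[1]]
        score = lambda p: w[0] * p[0] + w[1] * p[1]
        threshold = (score(m0) + score(m1)) / 2.0
        return w, threshold
    ```

    A new firm is scored by the linear function and classified by which side of the threshold it falls on.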

  7. Comparison of different techniques for in microgravity-a simple mathematic estimation of cardiopulmonary resuscitation quality for space environment.

    Science.gov (United States)

    Braunecker, S; Douglas, B; Hinkelbein, J

    2015-07-01

    Since astronauts are selected carefully, are usually young, and are intensively observed before and during training, relevant medical problems are rare. Nevertheless, there is a certain risk of a cardiac arrest in space requiring cardiopulmonary resuscitation (CPR). To date, 5 techniques for performing CPR in microgravity are known. The aim of the present study was to compare the quality of CPR achieved by these different techniques in microgravity. To identify relevant publications on CPR quality in microgravity, a systematic analysis with defined searching criteria was performed in the PubMed database (http://www.pubmed.com). For analysis, the keywords ("reanimation" or "CPR" or "resuscitation") and ("space" or "microgravity" or "weightlessness") and the specific names of the techniques ("Standard-technique" or "Straddling-manoeuvre" or "Reverse-bear-hug-technique" or "Evetts-Russomano-technique" or "Hand-stand-technique") were used. To compare the quality and effectiveness of the different techniques, we used the compression product (CP), a mathematical estimation for cardiac output. Using the predefined keywords for literature search, 4 different publications were identified (parabolic flight or under simulated conditions on earth) dealing with CPR efforts in microgravity and giving specific numbers. No study was performed under real-space conditions. Regarding compression depth, the handstand (HS) technique as well as the reverse bear hug (RBH) technique met parameters of the guidelines for CPR in 1G environments best (HS ratio, 0.91 ± 0.07; RBH ratio, 0.82 ± 0.13). Concerning compression rate, 4 of 5 techniques reached the required compression rate (ratio: HS, 1.08 ± 0.11; Evetts-Russomano [ER], 1.01 ± 0.06; standard side straddle, 1.00 ± 0.03; and straddling maneuver, 1.03 ± 0.12). The RBH method did not meet the required criteria (0.89 ± 0.09). The HS method showed the highest cardiac output (69.3% above the required CP), followed by the ER technique (33
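
    The compression product used in this comparison is simply compression depth multiplied by compression rate. A sketch, where the guideline reference values (50 mm depth, 100 compressions per minute) are assumptions chosen for illustration, not taken from the abstract:

    ```python
    def compression_product(depth_mm, rate_per_min):
        """Compression product (CP): compression depth times compression rate,
        a simple mathematical surrogate for cardiac output during CPR."""
        return depth_mm * rate_per_min

    def cp_ratio_vs_guideline(depth_mm, rate_per_min,
                              guide_depth_mm=50.0, guide_rate_per_min=100.0):
        """CP of a technique relative to the guideline CP (1.0 = meets guideline)."""
        return (compression_product(depth_mm, rate_per_min)
                / compression_product(guide_depth_mm, guide_rate_per_min))
    ```

    A technique delivering 45.5 mm at 108 min⁻¹ would then score a CP ratio of 0.9828 against that reference.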

  8. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    Science.gov (United States)

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
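
    One of the classic rapid heuristics reviewed in early-stage cost studies is the capacity power law (the "six-tenths rule"). A sketch, with an illustrative exponent and figures; this is not necessarily one of the six methods compared in the paper:

    ```python
    def capacity_scaled_cost(known_cost, known_capacity, target_capacity, exponent=0.6):
        """Capacity power law ('six-tenths rule'):
        cost_target = cost_known * (capacity_target / capacity_known) ** exponent."""
        return known_cost * (target_capacity / known_capacity) ** exponent
    ```

    Doubling plant capacity with the default exponent raises the cost estimate by a factor of 2^0.6 ≈ 1.52 rather than 2, reflecting economies of scale.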

  9. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
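
    A marginal matching-moments estimator (akin to the unweighted method (4) above) has a simple closed form when every component is observed over the same number of demands n; a sketch under that simplifying assumption, with invented failure counts:

    ```python
    def beta_binomial_mom(counts, n):
        """Marginal matching-moments estimates (alpha, beta) for a beta-binomial
        model, assuming every component was observed over the same n demands.

        Only meaningful for overdispersed data (sample variance above the
        binomial variance); otherwise the estimates can come out negative.
        """
        k = len(counts)
        m1 = sum(counts) / k                 # first sample moment
        m2 = sum(c * c for c in counts) / k  # second sample moment
        denom = n * (m2 / m1 - m1 - 1.0) + m1
        alpha = (n * m1 - m2) / denom
        beta = (n - m1) * (n - m2 / m1) / denom
        return alpha, beta
    ```

    By construction the implied beta mean alpha/(alpha+beta) reproduces the sample failure fraction, which is a quick sanity check on any implementation.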

  10. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    We present the optical fractionator in conjunction with BrdU immunohistochemistry to estimate the production and survival of newly-formed neurons in the granule cell layer (including the sub-granular zone) of the rat hippocampus following electroconvulsive stimulation, which is among the most potent...

  11. Estimating changes in urban land and urban population using refined areal interpolation techniques

    Science.gov (United States)

    Zoraghein, Hamidreza; Leyk, Stefan

    2018-05-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
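
    The simplest of the interpolation methods mentioned, Areal Weighting, allocates each source zone's population to the target zone in proportion to the overlapping area. A minimal sketch with invented zones (no spatial refinement):

    ```python
    def areal_weighting(source_pops, source_areas, overlap_areas):
        """Areal Weighting (AW): each source zone contributes its population in
        proportion to the fraction of its area overlapping the target zone."""
        return sum(pop * (overlap / area)
                   for pop, area, overlap in zip(source_pops, source_areas, overlap_areas))
    ```

    Two source tracts of 100 and 200 people, each half-covered by the target zone, yield a target estimate of 150. Refined variants restrict the areas to ancillary-data urban masks before weighting.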

  12. A triple isotope technique for estimation of fat and vitamin B12 malabsorption in Crohn's disease

    International Nuclear Information System (INIS)

    Pedersen, N.T.; Rannem, T.

    1991-01-01

    A test for simultaneous estimation of vitamin B12 and fat absorption from stool samples was investigated in 25 patients with severe diarrhoea after operation for Crohn's disease. Cr-51 chloride (CrCl3) was ingested as a non-absorbable marker, Co-58-cyanocobalamin as the vitamin B12 tracer, and C-14-triolein as the lipid tracer. Faeces were collected separately for three days. Some stool-to-stool variation in the Co-58/Cr-51 and C-14/Cr-51 ratios was seen. When the Co-58-B12 and C-14-triolein excretion was estimated in samples of the two stools with the highest activities of Cr-51, the variations of the estimates were less than ±10% and ±15% of the doses ingested, respectively. 12 of the 25 patients were not able to collect faeces and urine quantitatively and separately. However, in all patients, faeces with sufficient radioactivity for simultaneous estimation of faecal Co-58-B12 and C-14-triolein excretion from stool samples were obtained. 16 refs., 3 figs., 1 tab

  13. Evaluating the microscopic fecal technique for estimating hard mast in turkey diets

    Science.gov (United States)

    Mark A. Rumble; Stanley H. Anderson

    1993-01-01

    Wild and domestic dark turkeys (Meleagris gallopavo) were fed experimental diets containing acorn (Quercus gambelli), ponderosa pine (Pinus ponderosa) seed, grasses, forbs, and arthropods. In fecal estimates of diet composition, acorn and ponderosa pine seed were underestimated and grass was overestimated....

  14. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    Science.gov (United States)

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-01

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.
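
    The 2C-SNOW-PROFILE algorithm follows the standard optimal-estimation form, in which the MASC-derived microphysics enter through the a priori state $\mathbf{x}_a$ and its covariance $\mathbf{S}_a$. For a linearized forward model $\mathbf{K}$ with measurement-error covariance $\mathbf{S}_\epsilon$, the textbook update (not quoted from the abstract) is:

    ```latex
    \hat{\mathbf{x}} = \mathbf{x}_a
      + \left(\mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}\mathbf{K} + \mathbf{S}_a^{-1}\right)^{-1}
        \mathbf{K}^{\mathsf T}\mathbf{S}_\epsilon^{-1}\left(\mathbf{y} - \mathbf{K}\mathbf{x}_a\right)
    ```

    Tightening the a priori variances (here informed by MASC fall speed and PSD measurements) pulls the retrieval toward the in situ constraints and away from the radar-only ambiguity.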

  15. A spatial compression technique for head-related transfer function interpolation and complexity estimation

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John

    2015-01-01

    A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...

  16. Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression

    Science.gov (United States)

    Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.

    2013-01-01

    Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…

  17. Estimating bridge stiffness using a forced-vibration technique for timber bridge health monitoring

    Science.gov (United States)

    James P. Wacker; Xiping Wang; Brian Brashaw; Robert J. Ross

    2006-01-01

    This paper describes an effort to refine a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the frequency response of several simple-span, sawn timber beam (with plank deck) bridges located in St. Louis County, Minnesota. Static load deflections were also measured to...

  18. Ground Receiving Station Reference Pair Selection Technique for a Minimum Configuration 3D Emitter Position Estimation Multilateration System

    Directory of Open Access Journals (Sweden)

    Abdulmalik Shehu Yaro

    2017-01-01

    Full Text Available Multilateration estimates aircraft position using the Time Difference Of Arrival (TDOA) with a lateration algorithm. The Position Estimation (PE) accuracy of the lateration algorithm depends on several factors: the TDOA estimation error, the lateration algorithm approach, the number of deployed GRSs and the selection of the GRS reference used for the PE process. Using the minimum number of GRSs for 3D emitter PE, a technique based on the condition number calculation is proposed to select the suitable GRS reference pair for improving the accuracy of the PE using the lateration algorithm. Validation of the proposed technique was performed with the GRSs in the square and triangular GRS configurations. For the selected emitter positions, the result shows that the proposed technique can be used to select the suitable GRS reference pair for the PE process. A unity condition number is achieved for the GRS pair most suitable for the PE process. Monte Carlo simulation results, in comparison with the fixed GRS reference pair lateration algorithm, show a reduction in PE error of at least 70% for both the square and triangular configurations.
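
    A sketch of the condition-number test that drives the reference-pair selection, here for an illustrative 2x2 geometry matrix; the paper's actual matrices come from the GRS/TDOA geometry and are not reproduced:

    ```python
    import math

    def cond_2x2(m):
        """2-norm condition number of a 2x2 matrix via the eigenvalues of M^T M."""
        a, b = m[0]
        c, d = m[1]
        g11, g12, g22 = a * a + c * c, a * b + c * d, b * b + d * d
        tr, det = g11 + g22, g11 * g22 - g12 * g12
        disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
        lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
        return math.sqrt(lam_max / lam_min)

    def best_reference_pair(candidate_matrices):
        """Index of the candidate whose condition number is closest to unity."""
        return min(range(len(candidate_matrices)),
                   key=lambda i: abs(cond_2x2(candidate_matrices[i]) - 1.0))
    ```

    A well-conditioned geometry (condition number near 1) keeps TDOA noise from being amplified in the lateration solve; an ill-conditioned pair (e.g. condition number 10) is rejected.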

  19. Fatigue life estimation of a 1D aluminum beam under mode-I loading using the electromechanical impedance technique

    International Nuclear Information System (INIS)

    Lim, Yee Yan; Soh, Chee Kiong

    2011-01-01

    Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future

  20. Fatigue life estimation of a 1D aluminum beam under mode-I loading using the electromechanical impedance technique

    Science.gov (United States)

    Lim, Yee Yan; Kiong Soh, Chee

    2011-12-01

    Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.

  1. A Robust Parametric Technique for Multipath Channel Estimation in the Uplink of a DS-CDMA System

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available The problem of estimating the multipath channel parameters of a new user entering the uplink of an asynchronous direct sequence-code division multiple access (DS-CDMA) system is addressed. The problem is described via a least squares (LS) cost function with a rich structure. This cost function, which is nonlinear with respect to the time delays and linear with respect to the gains of the multipath channel, is proved to be approximately decoupled in terms of the path delays. Due to this structure, an iterative procedure of 1D searches is adequate for time delay estimation. The resulting method is computationally efficient, does not require any specific pilot signal, and performs well for a small number of training symbols. Simulation results show that the proposed technique offers better estimation accuracy compared to existing related methods, and is robust to multiple access interference.

  2. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
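
    The two quantities being compared are straightforward to compute; a sketch of NDVI from band reflectances and of the Beer's-law inversion LAI = -ln(I/I0)/k used for the PAR-interception estimate (the extinction coefficient k = 0.5 is an illustrative default, not the study's fitted value):

    ```python
    import math

    def ndvi(red, nir):
        """Normalized difference vegetation index from red and near-infrared reflectance."""
        return (nir - red) / (nir + red)

    def lai_from_par(transmitted_par, incident_par, k=0.5):
        """Invert Beer's law I = I0 * exp(-k * LAI)  =>  LAI = -ln(I / I0) / k."""
        return -math.log(transmitted_par / incident_par) / k
    ```

    For example, reflectances of 0.05 (red) and 0.45 (NIR) give NDVI = 0.8, and a canopy transmitting exp(-1) of incident PAR with k = 0.5 implies LAI = 2.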

  3. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d' Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  4. Capacity Estimation and Near-Capacity Achieving Techniques for Digitally Modulated Communication Systems

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov

    This thesis studies potential improvements that can be made to the current data rates of digital communication systems. The physical layer of the system will be investigated in band-limited scenarios, where high spectral efficiency is necessary in order to meet the ever-growing data rate demand. Several issues are tackled, both with theoretical and more practical aspects. The theoretical part is mainly concerned with estimating the constellation constrained capacity (CCC) of channels with discrete input, which is an inherent property of digital communication systems. The channels under investigation will include linear interference channels of high dimensionality (such as multiple-input multiple-output), and the non-linear optical fiber channel, which has been gathering more and more attention from the information theory community in recent years. In both cases novel CCC estimates and lower...

  5. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    Science.gov (United States)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method, based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.

  6. Estimated sedimentation rate by radionuclide techniques at Lam Phra Phloeng dam, Northeastern of Thailand

    International Nuclear Information System (INIS)

    Sasimonton Moungsrijun; Kanitha Srisuksawad; Kosit Lorsirirat; Tuangrak Nantawisarakul

    2009-01-01

    The Lam Phra Phloeng dam is located in Nakhon Ratchasima province, northeastern Thailand. Since its construction in 1963, the dam has undergone a severe reduction of its water storage capacity, caused by deforestation for agricultural land in the upper catchment. Sediment cores were collected using a gravity corer. Sedimentation rates were estimated from the vertical distribution of unsupported Pb-210 in the sediment cores. Total Pb-210 was determined by measuring Po-210 activities. The Po-210 and Ra-226 activities were used to determine the sedimentation rate by alpha and gamma spectrometry. The sedimentation rate was estimated using the Constant Initial Concentration (CIC) model: 0.265 g cm⁻² y⁻¹ at the dam crest and 0.213 g cm⁻² y⁻¹ upstream. (Author)
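
    Under the CIC model, ln(unsupported Pb-210 activity) falls linearly with cumulative mass depth, and the mass sedimentation rate follows from the slope. A sketch with synthetic core data (the depths and activities are invented):

    ```python
    import math

    PB210_LAMBDA = math.log(2) / 22.3  # Pb-210 decay constant (1/yr), half-life ~22.3 yr

    def cic_sedimentation_rate(mass_depths, unsupported_pb210):
        """CIC model: ln(activity) is linear in cumulative mass depth z with slope
        -lambda / r, so the mass sedimentation rate is r = -lambda / slope."""
        n = len(mass_depths)
        ys = [math.log(a) for a in unsupported_pb210]
        xm = sum(mass_depths) / n
        ym = sum(ys) / n
        slope = (sum((x - xm) * (y - ym) for x, y in zip(mass_depths, ys))
                 / sum((x - xm) ** 2 for x in mass_depths))
        return -PB210_LAMBDA / slope
    ```

    Feeding the function an ideal exponential profile generated with a known rate recovers that rate, which is a useful check before applying it to noisy core data.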

  7. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    Science.gov (United States)

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  8. Application of Ambient Analysis Techniques for the Estimation of Electromechanical Oscillations from Measured PMU Data in Four Different Power Systems

    DEFF Research Database (Denmark)

    Vanfretti, Luigi; Dosiek, Luke; Pierre, John W.

    2011-01-01

    The application of advanced signal processing techniques to power system measurement data for the estimation of dynamic properties has been a research subject for over two decades. Several techniques have been applied to transient (or ringdown) data, ambient data, and to probing data. Some of these methodologies have been included in off-line analysis software, and are now being incorporated into software tools used in control rooms for monitoring the near real-time behavior of power system dynamics. In this paper we illustrate the practical application of some ambient analysis methods... and planners, as they provide information on the applicability of these techniques via readily available signal processing tools; in addition, it is shown how to critically analyze the results obtained with these methods.

  9. Weight estimates and packaging techniques for the microwave radiometer spacecraft. [shuttle compatible design

    Science.gov (United States)

    Jensen, J. K.; Wright, R. L.

    1981-01-01

    Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.

  10. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, the research on prompt gamma imaging is aiming at making most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We will illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19% in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying 1000 realizations of Poisson noise to each profile. In 67% of cases, the learning model had lower prediction errors than the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.

  11. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    Science.gov (United States)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
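    For photon counting experiments of the kind the record discusses, the likelihood-ratio approach reduces to minimising a Poisson fit statistic (-2 ln L up to a constant) over the model parameters. A minimal sketch for a constant-rate model, with illustrative counts:

    ```python
    import math

    # Observed photon counts in detector bins (illustrative data).
    counts = [3, 5, 4, 6, 2, 7, 5, 4]

    def cash_C(rate):
        """Poisson fit statistic, -2 ln L up to a model-independent
        constant, for a constant-rate model m_i = rate in every bin."""
        return 2.0 * sum(rate - n * math.log(rate) for n in counts)

    # Maximum-likelihood estimate: minimise the statistic over a grid of
    # candidate rates (the analytic optimum is the sample mean).
    grid = [0.1 * k for k in range(10, 101)]
    best = min(grid, key=cash_C)
    print(best)  # prints 4.5
    ```

    Confidence intervals follow from the likelihood ratio: parameter values whose statistic exceeds the minimum by the appropriate chi-squared quantile are rejected.
    
    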

  12. Data-driven Techniques to Estimate Parameters in the Homogenized Energy Model for Shape Memory Alloys

    Science.gov (United States)

    2011-11-01

    Both cases are compared to experimental data at various temperatures, and the optimized model parameters are compared to the initial estimates. The super-elastic effect has been utilized in orthodontic wires, eye-glass frames, stents, and annuloplasty bands [23].

  13. Application on technique of joint time-frequency analysis of seismic signal's first arrival estimation

    International Nuclear Information System (INIS)

    Xu Chaoyang; Liu Junmin; Fan Yanfang; Ji Guohua

    2008-01-01

    Joint time-frequency analysis constructs a joint density function of time and frequency, revealing a signal's frequency components and how they evolve over time; it is a natural extension of Fourier analysis. In this paper, taking into account the noise characteristics of seismic signals, an estimation method for a seismic signal's first arrival based on the triple correlation of the joint time-frequency spectrum is introduced, and experimental results and conclusions are presented. (authors)
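    The first-arrival picking task itself can be illustrated with a much cruder stand-in than the record's triple-correlation method: a short-time energy detector on a synthetic trace. Everything below (trace model, window length, threshold) is an assumption for illustration only.

    ```python
    import math
    import random

    # Synthetic trace: noise followed by an oscillatory arrival at sample 300
    # (an illustrative stand-in for a seismic record).
    random.seed(1)
    n = 1000
    trace = [0.1 * random.gauss(0, 1) for _ in range(n)]
    for i in range(300, n):
        trace[i] += math.sin(2 * math.pi * 0.05 * (i - 300))

    # Short-time energy in consecutive windows (a crude time-frequency proxy);
    # the first arrival is picked where the energy exceeds a threshold taken
    # from the first (assumed signal-free) window.
    win = 50
    energies = [sum(x * x for x in trace[i:i + win])
                for i in range(0, n - win + 1, win)]
    noise = energies[0]
    first_win = next(i for i, e in enumerate(energies) if e > 5 * noise)
    first_arrival = first_win * win
    print(first_arrival)  # prints 300
    ```

    The triple-correlation approach of the record refines exactly this step, suppressing noise that a plain energy threshold would mistake for the onset.
    
    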

  14. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
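    The concavity that makes MAP decoding tractable can be shown in a stripped-down sketch: one neuron, a scalar stimulus per time bin, a Poisson encoding model with exponential nonlinearity, and an independent Gaussian prior. The weights, prior variance, and spike counts below are illustrative, not the paper's population framework.

    ```python
    import math

    # MAP stimulus decoding sketch: Poisson spiking with rate
    # exp(w*x + b) gives a concave log-posterior in x, so simple
    # gradient ascent converges to the MAP stimulus.
    weights = 1.2              # encoding weight (assumed known)
    bias = 0.1
    prior_var = 4.0            # variance of the Gaussian stimulus prior
    spikes = [0, 2, 1, 3, 0]   # observed counts per time bin

    def decode_map(y, iters=2000, lr=0.05):
        x = [0.0] * len(y)
        for _ in range(iters):
            for t in range(len(y)):
                rate = math.exp(weights * x[t] + bias)
                # gradient of y*log(rate) - rate - x^2/(2*prior_var)
                grad = weights * (y[t] - rate) - x[t] / prior_var
                x[t] += lr * grad
        return x

    x_map = decode_map(spikes)
    print([round(v, 2) for v in x_map])
    ```

    With temporally correlated priors or coupled populations the posterior stays log-concave under the same model class, which is what licenses the efficient MAP and Gaussian-approximation machinery the abstract describes.
    
    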

  15. Estimation of dew point temperature using neuro-fuzzy and neural network techniques

    Science.gov (United States)

    Kisi, Ozgur; Kim, Sungwon; Shiri, Jalal

    2013-11-01

    This study investigates the ability of two different artificial neural network (ANN) models, the generalized regression neural networks model (GRNNM) and the Kohonen self-organizing feature maps neural networks model (KSOFM), and two different adaptive neuro-fuzzy inference system (ANFIS) models, the ANFIS model with sub-clustering identification (ANFIS-SC) and the ANFIS model with grid partitioning identification (ANFIS-GP), for estimating daily dew point temperature. The climatic data, consisting of 8 years of daily records of air temperature, sunshine hours, wind speed, saturation vapor pressure, relative humidity, and dew point temperature from three weather stations, Daegu, Pohang, and Ulsan, in South Korea, were used in the study. The estimates of the ANN and ANFIS models were compared using three different statistics: root mean square error, mean absolute error, and the coefficient of determination. The comparison revealed that the ANFIS-SC, ANFIS-GP, and GRNNM models showed almost the same accuracy and performed better than the KSOFM model. Results also indicated that sunshine hours, wind speed, and saturation vapor pressure have little effect on dew point temperature. It was found that the dew point temperature could be successfully estimated using only the Tmean and RH variables.
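    The finding that Tmean and RH suffice is consistent with standard psychrometrics: the Magnus approximation computes dew point from exactly those two inputs. A sketch of that closed-form baseline (not the record's ANN or ANFIS models; the Magnus coefficients are the common textbook values):

    ```python
    import math

    def dew_point_magnus(t_mean_c, rh_percent, a=17.27, b=237.7):
        """Magnus-formula dew point (deg C) from mean air temperature (deg C)
        and relative humidity (%) -- a standard approximation, not the
        paper's neural models."""
        gamma = a * t_mean_c / (b + t_mean_c) + math.log(rh_percent / 100.0)
        return b * gamma / (a - gamma)

    print(round(dew_point_magnus(20.0, 50.0), 1))  # prints 9.3
    ```

    A data-driven model such as GRNNM or ANFIS can then be read as learning corrections to this kind of physical baseline from station records.
    
    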

  16. An innovative technique for estimating water saturation from capillary pressure in clastic reservoirs

    Science.gov (United States)

    Adeoti, Lukumon; Ayolabi, Elijah Adebowale; James, Logan

    2017-11-01

    A major drawback of old resistivity tools is their poor vertical resolution, which degrades hydrocarbon estimates when water saturation (Sw) is derived by the historical resistivity method. In this study, we provide an alternative method, the saturation height function, to estimate hydrocarbon in some clastic reservoirs in the Niger Delta. The saturation height function was derived from pseudo capillary pressure curves generated using modern wells with complete log data. Our method is based on the determination of rock type from the log-derived porosity-permeability relationship, supported by volume of shale for classification into different zones. Leverett J-functions were derived for each rock type. Our results show good correlation between Sw from the resistivity-based method and Sw from pseudo capillary pressure curves in wells with modern log data. The resistivity-based model overestimates Sw in some wells, whereas Sw from the pseudo capillary pressure curves validates and predicts Sw more accurately. In addition, Sw from pseudo capillary pressure curves replaces that of the resistivity-based model in a well where the resistivity equipment failed. A plot of hydrocarbon pore volume (HCPV) from the J-function against HCPV from Archie shows that wells with high HCPV have high sand qualities and vice versa. This was further used to predict the geometry of stratigraphic units. The model presented here directly addresses the gap in the estimation of Sw and is applicable to reservoirs of similar rock type in other frontier basins worldwide.
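    The Leverett J-function step can be sketched directly. The unit constant 0.21645 is the standard field-unit conversion; the interfacial tension, contact angle, and the fitted power-law coefficients below are illustrative placeholders, since per-rock-type values would come from core calibration as in the record.

    ```python
    import math

    def leverett_j(pc_psi, k_md, phi, sigma=26.0, cos_theta=1.0):
        """Leverett J-function in field units: Pc in psi, permeability in mD,
        porosity as a fraction, interfacial tension sigma in dyn/cm."""
        return 0.21645 * pc_psi / (sigma * cos_theta) * math.sqrt(k_md / phi)

    def sw_from_j(j, a=0.35, b=-0.6):
        """Illustrative power-law saturation height function Sw = a * J**b;
        a and b would be fitted per rock type from core capillary data."""
        return min(1.0, a * j ** b)

    j = leverett_j(pc_psi=5.0, k_md=100.0, phi=0.25)
    print(round(sw_from_j(j), 3))
    ```

    Grouping wells by rock type before fitting a and b is what lets the saturation height function substitute for resistivity-based Sw where logs are missing or failed.
    
    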

  17. Genetic divergence of rubber tree estimated by multivariate techniques and microsatellite markers

    Directory of Open Access Journals (Sweden)

    Lígia Regina Lima Gouvêa

    2010-01-01

    Full Text Available The genetic diversity of 60 Hevea genotypes, consisting of Asiatic, Amazonian, African and IAC clones belonging to the genetic breeding program of the Agronomic Institute (IAC), Brazil, was estimated. Analyses were based on phenotypic multivariate parameters and microsatellites. Five agronomic descriptors were employed in multivariate procedures, such as Standard Euclidean Distance, Tocher clustering and principal component analysis. Genetic variability among the genotypes was estimated with 68 selected polymorphic SSRs, by way of Modified Rogers Genetic Distance and UPGMA clustering. Structure software, in a Bayesian approach, was used to discriminate among groups. Genetic diversity was estimated through Nei's statistics. The genotypes were clustered into 12 groups according to the Tocher method, while the molecular analysis identified six groups. In both the phenotypic and the microsatellite analyses, the Amazonian and IAC genotypes were distributed across several groups, whereas the Asiatic ones fell into only a few. Observed heterozygosity ranged from 0.05 to 0.96. Both high total diversity (H_T' = 0.58) and high gene differentiation (G_st' = 0.61) were observed, indicating high genetic variation among the 60 genotypes, which may be useful for breeding programs. The analyzed agronomic parameters and SSR markers were effective in assessing genetic diversity among Hevea genotypes, besides proving useful for characterizing genetic variability.
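    The Modified Rogers distance used for the SSR data has a simple closed form: the Euclidean distance between allele-frequency vectors, scaled by the square root of twice the number of loci. A minimal sketch with made-up diploid scores for two genotypes at two markers:

    ```python
    import math

    def modified_rogers(p, q, n_loci):
        """Modified Rogers distance between two genotypes: p and q are
        concatenated allele-frequency vectors over all loci (0, 0.5 or 1
        for diploid scores), n_loci the number of loci."""
        s = sum((a - b) ** 2 for a, b in zip(p, q))
        return math.sqrt(s / (2 * n_loci))

    # Illustrative genotypes over 2 loci (3 and 2 alleles respectively).
    geno_a = [1.0, 0.0, 0.0,  0.5, 0.5]
    geno_b = [0.5, 0.5, 0.0,  0.0, 1.0]
    print(modified_rogers(geno_a, geno_b, n_loci=2))  # prints 0.5
    ```

    A full analysis would compute this distance for all genotype pairs and feed the matrix to UPGMA clustering, as in the record.
    
    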

  18. Data Mining Techniques to Estimate Plutonium, Initial Enrichment, Burnup, and Cooling Time in Spent Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Trellue, Holly Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fugate, Michael Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tobin, Stephen Joesph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    The Next Generation Safeguards Initiative (NGSI), Office of Nonproliferation and Arms Control (NPAC), National Nuclear Security Administration (NNSA) of the U.S. Department of Energy (DOE) has sponsored a multi-laboratory, university, international partner collaboration to (1) detect replaced or missing pins from spent fuel assemblies (SFA) to confirm item integrity and deter diversion, (2) determine plutonium mass and related plutonium and uranium fissile mass parameters in SFAs, and (3) verify initial enrichment (IE), burnup (BU), and cooling time (CT) of facility declaration for SFAs. A wide variety of nondestructive assay (NDA) techniques were researched to achieve these goals [Veal, 2010 and Humphrey, 2012]. In addition, the project includes two related activities with facility-specific benefits: (1) determination of heat content and (2) determination of reactivity (multiplication). In this research, a subset of 11 integrated NDA techniques was researched using data mining solutions at Los Alamos National Laboratory (LANL) for their ability to achieve the above goals.

  19. Application of fission track technique for estimation of uranium concentration in drinking waters of Punjab

    International Nuclear Information System (INIS)

    Prabhu, S.P.; Sawant, P.D.; Raj, S.S.; Kumar, A.; Sarkar, P.K.; Tripathi, R.M.

    2012-01-01

    Drinking water samples were collected from four districts of Punjab, namely Bhatinda, Mansa, Faridkot and Firozpur, for ascertaining U(nat.) concentrations. All samples were preserved, processed and analyzed by laser fluorimetry (LF). To ensure the accuracy of the data obtained by LF, a few samples (10 per district) were also analyzed by alpha spectrometry as well as by the fission track analysis (FTA) technique. For the FTA technique, a few μL of water sample were transferred to a polythene tube, a Lexan detector was immersed in it, and the other end of the tube was also heat-sealed. Two samples and one uranium standard were irradiated in the DHRUVA reactor. The irradiated detectors were chemically etched and the tracks counted using an optical microscope. Uranium concentrations in the samples ranged from 3.2 to 60.5 ppb and were comparable with those observed by LF. (author)
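    Because sample and standard are irradiated together, the concentration step of FTA reduces to a ratio of track densities. A sketch with illustrative track counts over equal detector areas:

    ```python
    def uranium_ppb(tracks_sample, tracks_standard, std_conc_ppb):
        """Comparator-style fission track estimate: with sample and standard
        irradiated under the same neutron fluence, uranium concentration
        scales with the ratio of induced fission track densities counted
        over equal detector areas."""
        return std_conc_ppb * tracks_sample / tracks_standard

    # Illustrative numbers: 240 tracks for the sample vs 800 for a 50 ppb
    # standard over equal areas.
    print(uranium_ppb(240, 800, 50.0))  # prints 15.0
    ```

    In practice the counts would be normalised by the counted areas and corrected for background before taking the ratio.
    
    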

  20. A differential absorption technique to estimate atmospheric total water vapor amounts

    Science.gov (United States)

    Frouin, Robert; Middleton, Elizabeth

    1990-01-01

    Vertically integrated water vapor amounts can be determined remotely by measuring the solar radiance reflected by the earth's surface with satellite- or aircraft-based instruments. The technique is based on the method of Fowle (1912, 1913) and utilizes the 0.940-micron water vapor band to retrieve total water vapor amounts independently of surface reflectance properties and other atmospheric constituents. A channel combination is proposed to provide more accurate results, the SE-590 spectrometer is used to verify the data, and the effect of atmospheric photon backscattering is examined. The spectrometer and radiosonde data confirm the accuracy of using a narrow and a wide channel centered on the same wavelength to determine water vapor amounts. The technique is suitable for cloudless conditions and can contribute to atmospheric corrections of land-surface parameters.
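    The two-channel idea can be sketched numerically: the narrow 0.940-micron channel is absorbed by water vapour while the wide channel largely is not, so their ratio approximates the band transmittance, which a band-absorption model then inverts to precipitable water. The square-root band model and the coefficient k below are illustrative assumptions, not the calibrated relation of the record.

    ```python
    import math

    def water_vapor_cm(r_narrow, r_wide, k=0.6):
        """Invert an assumed square-root band model T = exp(-k*sqrt(W))
        for precipitable water W (cm), where T is approximated by the
        ratio of narrow- to wide-channel reflected radiances. k is an
        illustrative band coefficient, not a calibrated value."""
        t = r_narrow / r_wide
        return (math.log(t) / -k) ** 2

    print(round(water_vapor_cm(0.4, 0.8), 2))
    ```

    Centering both channels on the same wavelength is what cancels the surface reflectance, which divides out of the ratio.
    
    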

  1. Application of the luminescence single-aliquot technique for dose estimation in the Marmara Sea

    International Nuclear Information System (INIS)

    Tanir, Guenes; Sencan, Emine; Boeluekdemir, M. Hicabi; Tuerkoez, M. Burak; Tel, Eyuep

    2005-01-01

    The aim of this study is to obtain the equivalent dose, the key quantity in all studies using luminescence to date sediments. Recent advances in luminescence dating have led to increasing application of the technique to sediments from depositional environments. The sample used in this study was collected from the active main fault in the Sea of Marmara, NW Turkey. The equivalent dose was measured using both the multiple-aliquot and the single-aliquot techniques. The single aliquot regeneration and added dose (SARA) procedure was also used. The result obtained was not in agreement with the results of the multiple-aliquot procedure, so a simple modification of the SARA procedure was suggested. In our modified procedure, the calculated dose (D) values were obtained using the additive dose protocol instead of the regeneration protocol.

  2. Skill Assessment of An Hybrid Technique To Estimate Quantitative Precipitation Forecast For Galicia (nw Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has rain on more than 50% of days per year, with quantities that may cause floods, with human and economic damage. The technique combines a NWP model (ARPS) with a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S. 2000. "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp: 306-337. [2] de Bruijn, E.I.F. and T. Brandsma. "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp: 148-161. [3] Ribalaygua, J. and Boren, R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular". Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  3. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with{sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming the same biological half-lives as the {sup 99m}Tc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
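    The voxel S value step the record compares against Monte Carlo is a 3-D convolution: absorbed dose equals the time-integrated activity map convolved with a radionuclide-specific voxel S kernel. A minimal numpy sketch with an illustrative kernel (the values are placeholders, not published S values):

    ```python
    import numpy as np

    # One source voxel of time-integrated activity (arbitrary units).
    activity = np.zeros((5, 5, 5))
    activity[2, 2, 2] = 1.0

    # Illustrative 3x3x3 voxel S kernel: self-dose at the centre,
    # nearest-neighbour cross-dose around it.
    kernel = np.zeros((3, 3, 3))
    kernel[1, 1, 1] = 1.0
    for dx, dy, dz in [(0,0,1),(0,0,-1),(0,1,0),(0,-1,0),(1,0,0),(-1,0,0)]:
        kernel[1+dx, 1+dy, 1+dz] = 0.1

    # Direct circular convolution with numpy only (scipy.ndimage.convolve
    # would do the same job on real-sized maps).
    dose = np.zeros_like(activity)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                shifted = np.roll(activity, (i-1, j-1, k-1), axis=(0, 1, 2))
                dose += kernel[i, j, k] * shifted

    print(dose[2, 2, 2], dose[2, 2, 3])
    ```

    The method's limitation, visible in the record's results, is that one kernel assumes a homogeneous medium, whereas Monte Carlo tracks particles through the patient-specific anatomy.
    
    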

  4. Estimating surface soil moisture from SMAP observations using a Neural Network technique.

    Science.gov (United States)

    Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P

    2018-01-01

    A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m³ m⁻³, 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m³ m⁻³, 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.

  5. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource to be exploited. In the design process of both active and passive solar energy systems, radiation data is required for the site, with proper spatial resolution. Generally, a network of radiometric stations is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be used as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, the correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the need for ground measurements. The methodology uses a modified Heliosat-2 model and applies to all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10%, and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean measured value. (author)
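    The DTM horizon step mentioned above is conceptually simple: for each grid point and azimuth, the horizon is the maximum elevation angle to any terrain cell along that direction, and the sun is blocked whenever its elevation falls below it. A toy 1-D sketch with an illustrative elevation profile:

    ```python
    import math

    # Elevations (m) of successive cells east of the point of interest.
    cell_size = 100.0                               # metres per cell
    profile = [200.0, 210.0, 250.0, 400.0, 350.0]

    def horizon_angle_deg(z0, profile, cell_size):
        """Maximum elevation angle (degrees) from a point at height z0 to
        terrain cells along one direction of a DTM profile."""
        best = 0.0
        for i, z in enumerate(profile, start=1):
            angle = math.degrees(math.atan2(z - z0, i * cell_size))
            best = max(best, angle)
        return best

    h = horizon_angle_deg(200.0, profile, cell_size)
    print(round(h, 1))  # prints 33.7
    ```

    Repeating this over a set of azimuths per DTM point yields the horizon map that modulates the satellite-derived irradiation.
    
    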

  6. New Technique for TOC Estimation Based on Thermal Core Logging in Low-Permeable Formations (Bazhen fm.)

    Science.gov (United States)

    Popov, Evgeny; Popov, Yury; Spasennykh, Mikhail; Kozlova, Elena; Chekhonin, Evgeny; Zagranovskaya, Dzhuliya; Belenkaya, Irina; Alekseev, Aleksey

    2016-04-01

    A practical method for identifying organic-rich intervals within low-permeable dispersive rocks, based on thermal conductivity measurements along the core, is presented. Non-destructive, non-contact thermal core logging was performed with the optical scanning technique on 4685 full-size core samples from 7 wells drilled in four low-permeable zones of the Bazhen formation (B.fm.) in Western Siberia (Russia). The method employs continuous simultaneous measurements of rock thermal conductivity, volumetric heat capacity, thermal anisotropy coefficient and thermal heterogeneity factor along the cores, allowing high vertical resolution (up to 1-2 mm). The B.fm. rock matrix thermal conductivity was observed to be essentially stable, within the range of 2.5-2.7 W/(m*K). A stable matrix thermal conductivity along with a high thermal anisotropy coefficient is characteristic of B.fm. sediments due to the low rock porosity values. It is shown experimentally that the measured thermal parameters relate linearly to organic richness rather than to deviations in the porosity coefficient. Thus, a new technique was developed that transforms the thermal conductivity profiles into continuous profiles of total organic carbon (TOC) values along the core. Comparison of the TOC values estimated from thermal conductivity with pyrolytic TOC estimates for 665 core samples, obtained using the Rock-Eval and HAWK instruments, demonstrated the high efficiency of the new technique for separating organic-rich intervals. The data obtained with the new technique are essential for assessing the source rock hydrocarbon generation potential, for basin and petroleum system modeling, and for the estimation of hydrocarbon reserves. The method allows the TOC richness to be accurately assessed using thermal well logs. The research work was done with the financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).
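    The reported linear relation between thermal conductivity and organic richness suggests the transform is, at its core, a calibrated linear fit applied along the log. A sketch with invented calibration points (the numbers are illustrative, not B.fm. data):

    ```python
    # Fit TOC = a + b * lambda on core points that also have pyrolysis TOC,
    # then apply the fit along the whole conductivity profile.
    cond = [2.1, 1.8, 1.5, 1.9, 1.4]    # W/(m*K), calibration samples
    toc = [3.0, 6.0, 9.0, 5.0, 10.0]    # wt %, pyrolysis on same depths

    n = len(cond)
    mx = sum(cond) / n
    my = sum(toc) / n
    b = sum((x - mx) * (y - my) for x, y in zip(cond, toc)) / \
        sum((x - mx) ** 2 for x in cond)
    a = my - b * mx

    profile = [2.0, 1.6, 1.3]           # conductivity log to transform
    print([round(a + b * lam, 1) for lam in profile])  # prints [4.0, 8.0, 11.0]
    ```

    The negative slope mirrors the physics: organic matter conducts heat poorly, so lower conductivity indicates higher TOC at near-constant matrix conductivity.
    
    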

  7. Qualitative performance comparison of reactivity estimation between the extended Kalman filter technique and the inverse point kinetic method

    International Nuclear Information System (INIS)

    Shimazu, Y.; Rooijen, W.F.G. van

    2014-01-01

    Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal to noise ratio is low (low flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality, with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, and its selection is straightforward. Guidance is therefore needed on which method to choose and how to select the required parameters. From this point of view, a qualitative performance comparison is carried out.
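    The IPK method inverts the point kinetics equations: given a measured flux history, the precursor concentrations are integrated forward and the reactivity is solved for algebraically at each step. A one-delayed-group sketch with illustrative kinetics data (a real implementation uses six groups and the record's delay filter on the flux signal):

    ```python
    # Inverse point kinetics (IPK) sketch, one delayed neutron group.
    beta, lam, Lambda = 0.0065, 0.08, 2e-5   # illustrative kinetics data

    def ipk_reactivity(n_hist, dt):
        C = beta / (Lambda * lam) * n_hist[0]    # steady-state precursors
        rho = []
        for k in range(1, len(n_hist)):
            n_prev, n_cur = n_hist[k - 1], n_hist[k]
            # precursor update (implicit Euler for stability)
            C = (C + dt * (beta / Lambda) * n_cur) / (1 + dt * lam)
            dndt = (n_cur - n_prev) / dt
            # invert dn/dt = ((rho - beta)/Lambda)*n + lam*C for rho
            rho.append(beta + Lambda * dndt / n_cur - Lambda * lam * C / n_cur)
        return rho

    # A critical reactor at constant flux should read zero reactivity.
    rho = ipk_reactivity([1.0] * 100, dt=0.1)
    print(round(rho[-1], 6))
    ```

    The EKF alternative instead treats reactivity as a state to be tracked, trading the IPK's single filter time constant for the two noise covariances discussed in the record.
    
    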

  8. Estimates of erosion on potato lands on krasnozems at Dorrigo, NSW, using the caesium-137 technique

    International Nuclear Information System (INIS)

    Elliott, G.L.; Cole-Clark, B.E.

    1993-01-01

    Caesium-137 measurements have been made on soil samples taken from a grid pattern in a paddock used for three spring potato crops since 1966. Total erosion was estimated from these measurements and found to average 297 t ha⁻¹, equivalent to 98 t ha⁻¹ per crop (allowing for erosion during the pasture phase). Comparative erosion estimates have been made from the results of single-transect sampling in a paddock used for two potato crops and in one under permanent pasture. Results suggest erosion rates of 57 t ha⁻¹ per crop in the former site and 0.09 t ha⁻¹ year⁻¹ in the latter. An erosion rate of 100 t ha⁻¹ per crop is at least 100 times the probable soil formation rate, implies an economic resource life of at most 600 years, and involves a cost of lost nutrients of at least $3200 per hectare. These results strongly suggest a need to develop and adopt land management practices which substantially reduce both soil detachment and transport. 19 refs., 3 tabs., 8 figs
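    A common way to convert a caesium-137 inventory deficit into an erosion rate is the proportional model: tracer loss is assumed proportional to loss of the plough layer. The parameter values below are illustrative assumptions, not those of the record.

    ```python
    def erosion_t_per_ha(cs_sample, cs_reference, plough_depth_m=0.2,
                         bulk_density_kg_m3=1300.0, years=27):
        """Proportional-model mean annual erosion (t/ha/yr) from the
        fractional 137Cs deficit relative to an uneroded reference site.
        Depth, bulk density and elapsed years are illustrative values."""
        deficit = 1.0 - cs_sample / cs_reference
        soil_mass_t_per_ha = plough_depth_m * bulk_density_kg_m3 * 10.0
        return deficit * soil_mass_t_per_ha / years

    # A site retaining 1600 of a reference 2000 Bq/m^2 (20% deficit):
    print(round(erosion_t_per_ha(1600.0, 2000.0), 1))
    ```

    More refined conversion models account for tracer enrichment in transported fines and for continuing fallout, which shift the estimate from this simple proportionality.
    
    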

  9. A Data Analysis Technique to Estimate the Thermal Characteristics of a House

    Directory of Open Access Journals (Sweden)

    Seyed Amin Tabatabaei

    2017-09-01

    Full Text Available Almost one third of the energy is used in the residential sector, and space heating is the largest part of energy consumption in our houses. Knowledge about the thermal characteristics of a house can increase the awareness of homeowners about options to save energy, for example by showing that there is room for improvement of the insulation level. However, calculating the exact values of these characteristics is not possible without precise thermal experiments. In this paper, we propose a method to automatically estimate two of the most important thermal characteristics of a house, i.e., the heat loss rate and the heat capacity, based on collected data about the temperature and gas usage. The method is evaluated with a data set collected in a real-life case study. Although a ground truth is lacking, the analyses provide evidence that this method offers a feasible way to estimate those values from thermostat data. More detailed data about the houses in which the data was collected is required to draw stronger conclusions. We conclude that the proposed method is a promising way to add energy saving advice to smart thermostats.
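    A grey-box version of this estimation can be sketched with the simplest lumped model, C·dT/dt = P − UA·(T − T_out): regressing the observed temperature changes on heating power and indoor-outdoor difference recovers both the loss rate UA and the capacity C. Synthetic data with known parameters stands in for thermostat records; the model and all values are illustrative, not the paper's method.

    ```python
    # Generate synthetic indoor temperatures from known parameters.
    UA_true, C_true = 150.0, 1.0e7        # W/K, J/K
    dt, T_out = 60.0, 0.0                 # 1-minute samples, 0 deg C outside
    T, P = 15.0, 3000.0
    temps, powers = [], []
    for _ in range(500):
        temps.append(T)
        powers.append(P)
        T += dt * (P - UA_true * (T - T_out)) / C_true

    # Least squares for dT/dt = a*P - b*(T - T_out); then C = 1/a, UA = b/a.
    s_pp = s_pd = s_dd = s_py = s_dy = 0.0
    for k in range(len(temps) - 1):
        y = (temps[k + 1] - temps[k]) / dt
        p, d = powers[k], temps[k] - T_out
        s_pp += p * p; s_pd += p * d; s_dd += d * d
        s_py += p * y; s_dy += d * y
    det = s_pp * s_dd - s_pd * s_pd
    a = (s_py * s_dd - s_dy * s_pd) / det
    b = -(s_dy * s_pp - s_py * s_pd) / det

    print(round(1.0 / a / 1e7, 2), round(b / a, 1))
    ```

    On real thermostat data the same regression runs on gas-derived heating power and measured temperatures, with noise and unmodelled gains (solar, occupants) limiting the accuracy, as the record notes.
    
    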

  10. Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques

    Directory of Open Access Journals (Sweden)

    Juan F. Reinoso

    2015-12-01

    Full Text Available To date, digital terrain model (DTM) accuracy has been studied almost exclusively through its height variable. However, the largely ignored horizontal component strongly influences the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, drawn from fluid mechanics (water and air flows or aerodynamics). The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEM) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, i.e., LiDAR versus photogrammetric data sources.
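    The core PIV operation is cross-correlation of corresponding patches: the correlation peak locates the horizontal displacement between the two grids. A minimal FFT-based sketch on a synthetic DEM with a known integer shift (real PIV adds windowing and sub-pixel peak interpolation):

    ```python
    import numpy as np

    # Synthetic DEM and a copy displaced by a known (3, -2) cell shift.
    rng = np.random.default_rng(0)
    dem_a = rng.standard_normal((64, 64))
    dem_b = np.roll(dem_a, shift=(3, -2), axis=(0, 1))

    # Circular cross-correlation via FFT; the argmax is the displacement.
    f = np.conj(np.fft.fft2(dem_a)) * np.fft.fft2(dem_b)
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

    # Map array indices to signed shifts in [-N/2, N/2).
    dy = dy - 64 if dy > 32 else dy
    dx = dx - 64 if dx > 32 else dx
    print(dy, dx)  # prints 3 -2
    ```

    Running this patch-wise over two DEMs yields a displacement field, whose statistics give the sub-cell mean errors the record reports after scale-factor calibration.
    
    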

  11. Estimates of the non-market value of sea turtles in Tobago using stated preference techniques.

    Science.gov (United States)

    Cazabon-Mannette, Michelle; Schuhmann, Peter W; Hailey, Adrian; Horrocks, Julia

    2017-05-01

    Economic benefits are derived from sea turtle tourism all over the world. Sea turtles also add value to underwater recreation and convey non-use values. This study examines the non-market value of sea turtles in Tobago. We use a choice experiment to estimate the value of sea turtle encounters to recreational SCUBA divers and the contingent valuation method to estimate the value of sea turtles to international tourists. Results indicate that turtle encounters were the most important dive attribute among those examined. Divers are willing to pay over US$62 per two-tank dive for the first turtle encounter. The mean willingness to pay (WTP) for turtle conservation among international visitors to Tobago was US$31.13, which reflects a significant non-use value associated with actions targeted at keeping sea turtles from going extinct. These results illustrate the significant non-use and non-consumptive use value of sea turtles, and highlight the importance of sea turtle conservation efforts in Tobago and throughout the Caribbean region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. The application of digital imaging techniques in the in vivo estimation of the body composition of pigs: a review

    International Nuclear Information System (INIS)

    Szabo, C.; Babinszky, L.; Verstegen, M.W.A.; Vangen, O.; Jansman, A.J.M.; Kanis, E.

    1999-01-01

    Calorimetry and comparative slaughter are techniques widely used to measure the chemical body composition of pigs, while dissection is the standard method to determine the physical (tissue) composition of the body. The disadvantage of calorimetry is the small number of observations possible, while comparative slaughter and dissection can be carried out only once on the same pig. Non-invasive imaging techniques, such as real-time ultrasound, computer tomography (CT) and magnetic resonance imaging (MRI), could constitute a valuable tool for serial estimation of body composition in living animals. The aim of this paper was to compare these methods. Ultrasound equipment entails relatively low cost and great mobility, but provides less information and lower accuracy about whole-body composition than CT and MRI. For this reason, the ultrasound technique will most probably remain the choice for field application. Computer tomography and MRI, with standardized and verified application methods, could provide a tool to substitute whole-body analysis and physical dissection. The main disadvantages of CT and MRI are their expense and lack of portability, and for these reasons such techniques will most likely be applied only in research and breeding programs.

  13. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of the principal moments of inertia within ±10% and to changes in the orientation of the principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors of up to 10% in the magnitude of the principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in the orientation of the principal axes of inertia. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. The orientation of the principal axes of inertia should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
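    The validation metric above reduces to the angle between two 3-vectors and its RMS over a simulated trajectory. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def deviation_angle(w_exp, w_sim):
    # angle, in degrees, between experimental and simulated angular velocity vectors
    cosang = np.dot(w_exp, w_sim) / (np.linalg.norm(w_exp) * np.linalg.norm(w_sim))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rms_deviation(angles_deg):
    # root mean squared deviation angle over all samples of a simulation
    return np.sqrt(np.mean(np.square(angles_deg)))
```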

  14. Estimation of genetic variability and heritability of wheat agronomic traits resulted from some gamma rays irradiation techniques

    International Nuclear Information System (INIS)

    Wijaya Murti Indriatama; Trikoesoemaningtyas; Syarifah Iis Aisyah; Soeranto Human

    2016-01-01

    Gamma irradiation techniques have a significant effect on the frequency and spectrum of macro-mutations, but studies of their effect on the micro-mutations related to genetic variability in mutated populations are very limited. The aim of this research was to study the effect of gamma irradiation techniques on the genetic variability and heritability of wheat agronomic characters in the M2 generation. This research was conducted from July to November 2014, at Cibadak experimental station, Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development, Ministry of Agriculture. Three introduced wheat breeding lines (F-44, Kiran-95 and WL-711) were treated with 3 gamma irradiation techniques (acute, fractionated and intermittent). The M1 generation of the combined treatments was planted and its spikes were harvested individually per plant. For the M2 generation, seeds of 75 M1 spikes were planted in the field using the one-row-one-spike method and evaluated for agronomic characters and their genetic components. The gamma irradiation techniques decreased the means but increased the ranges of agronomic traits in the M2 populations. Fractionated irradiation induced a higher mean and a wider range in spike length and number of spikelets per spike than the other irradiation techniques. Fractionated and intermittent irradiation resulted in greater variability of grain weight per plant than acute irradiation. The number of tillers, spike weight, grain weight per spike and grain weight per plant in the M2 populations resulting from the three gamma irradiation techniques have high estimated heritability and broad-sense genetic variability coefficient values. The three gamma irradiation techniques increased the genetic variability of agronomic traits in the M2 populations, except for plant height. (author)

  15. A Simple Technique to Estimate the Flammability Index of Moroccan Forest Fuels

    Directory of Open Access Journals (Sweden)

    M'Hamed Hachmi

    2011-01-01

    Full Text Available A formula to estimate a forest fuel flammability index (FI) is proposed, integrating three species flammability parameters: time to ignition, time of combustion, and flame height. Thirty-one (31) Moroccan tree and shrub species were tested within a wide range of fuel moisture contents. Six species flammability classes were identified. An ANOVA of the FI-values was performed and analyzed using four different sample sizes of 12, 24, 36, and 50 flammability tests. Fuel humidity content is inversely correlated with the FI-value, and a linear model appears to be the most adequate equation for predicting the hypothetical threshold-point of humidity of extinction. Most of the Moroccan forest fuels studied are classified as moderately flammable to flammable species based on their average humidity content, calculated for the summer period from July to September.
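    The inverse linear FI-moisture relation lends itself to a short sketch; the data points and the extinction threshold below are hypothetical, chosen only to illustrate extrapolating the fitted line to a threshold FI:

```python
import numpy as np

# hypothetical moisture contents (%) and flammability index values
mc = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
fi = np.array([8.2, 6.9, 5.1, 3.8, 2.2])

b, a = np.polyfit(mc, fi, 1)             # linear model FI = a + b*MC, with b < 0
fi_threshold = 0.0                        # assumed FI at which ignition fails
mc_extinction = (fi_threshold - a) / b    # hypothetical humidity of extinction
```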

  16. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Keyes, David E.

    2016-01-01

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
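    For a small dense illustration of the same idea (without the ℋ-matrix machinery), the hyper-parameters can be fitted by minimizing the negative log-likelihood directly; the exponential covariance, the jitter, and all starting values here are assumptions for the sketch, and the O(n³) Cholesky factorization is precisely the cost the ℋ-matrix format reduces:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import cho_factor, cho_solve

def neg_log_likelihood(params, X, y):
    s2, ell = np.exp(params)                      # log-parametrization keeps both positive
    r = np.abs(X[:, None] - X[None, :])           # pairwise distances between 1-D sites
    C = s2 * np.exp(-r / ell) + 1e-8 * np.eye(len(X))  # exponential covariance + jitter
    L, lower = cho_factor(C, lower=True)
    alpha = cho_solve((L, lower), y)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))     # log-determinant from the Cholesky factor
    return 0.5 * (y @ alpha + logdet + len(X) * np.log(2.0 * np.pi))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 10.0, 60))
y = np.sin(X) + 0.1 * rng.standard_normal(60)
res = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), args=(X, y), method="Nelder-Mead")
s2_hat, ell_hat = np.exp(res.x)                   # fitted variance and covariance length
```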

  17. Estimation of sea level variations with GPS/GLONASS-reflectometry technique

    Science.gov (United States)

    Padokhin, A. M.; Kurbatov, G. A.; Andreeva, E. S.; Nesterov, I. A.; Nazarenko, M. O.; Berbeneva, N. A.; Karlysheva, A. V.

    2017-11-01

    In the present paper we study GNSS-reflectometry methods for the estimation of sea level variations using a single GNSS receiver, which are based on the multipath propagation effects caused by the reflection of navigational signals from the sea surface. Such multipath propagation results in the appearance of an interference pattern in the signal-to-noise ratio (SNR) of GNSS signals at small satellite elevation angles, whose parameters are determined by the wavelength of the navigational signal and the height of the antenna phase center above the reflecting sea surface. In the current work we used GPS and GLONASS signals and measurements at the two working frequencies of both systems to study sea level variations, which almost doubles the amount of observations compared to a GPS-only tide gauge. For the UNAVCO sc02 station and the collocated Friday Harbor NOAA tide gauge we show good agreement between GNSS-reflectometry and traditional mareograph sea level data.
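    The height retrieval rests on the fact that, plotted against sin(elevation), the SNR oscillates with frequency 2h/λ. A synthetic sketch (antenna height, elevation range and the noise-free SNR model are all hypothetical):

```python
import numpy as np

lam = 0.19    # ~GPS L1 wavelength, m
h_true = 5.0  # antenna phase center height above the reflecting surface, m

E = np.deg2rad(np.linspace(5.0, 25.0, 400))   # low satellite elevation angles
x = np.sin(E)
snr = np.cos(4.0 * np.pi * h_true * x / lam)  # interference pattern in the SNR

# scan candidate heights; the spectral peak in the sin(E) domain sits at h_true
heights = np.linspace(1.0, 10.0, 901)
power = [abs(np.sum(snr * np.exp(-1j * 4.0 * np.pi * h * x / lam))) for h in heights]
h_est = heights[int(np.argmax(power))]
```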

  18. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander

    2016-10-25

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  19. Comparative methane estimation from cattle based on total CO2 production using different techniques

    Directory of Open Access Journals (Sweden)

    Md N. Haque

    2017-06-01

    Full Text Available The objective of this study was to compare the precision of CH4 estimates using calculated CO2 (HP) by the CO2 method (CO2T) and measured CO2 in the respiration chamber (CO2R). The CO2R and CO2T study was conducted as a 3 × 3 Latin square design in which 3 Dexter heifers were allocated to metabolic cages for 3 periods. Each period consisted of 2 weeks of adaptation followed by 1 week of measurement with the CO2R and CO2T. The average body weight of the heifers was 226 ± 11 kg (means ± SD). They were fed a total mixed ration, twice daily, with 1 of 3 supplements: wheat (W), molasses (M), or molasses mixed with sodium bicarbonate (Mbic). The dry matter intake (DMI; kg/day) was significantly greater (P < 0.001) in the metabolic cage than in the respiration chamber. The daily CH4 (L/day) emission was strongly correlated (r = 0.78) between CO2T and CO2R. The daily CH4 (L/kg DMI) emission by the CO2T was of the same magnitude as by the CO2R. The measured CO2 (L/day) production in the respiration chamber was not different (P = 0.39) from the CO2 production calculated using the CO2T. These results indicate reasonable accuracy and precision of CH4 estimation by the CO2T compared with the CO2R.
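    The CO2 method referred to above scales the breath CH4:CO2 concentration ratio by the animal's CO2 production; a one-line sketch (generic, not the study's code):

```python
def ch4_by_co2_method(ch4_to_co2_ratio, co2_production):
    # CO2 as an internal "tracer" gas: CH4 production = (CH4/CO2 concentration
    # ratio in breath samples) * calculated or measured CO2 production
    return ch4_to_co2_ratio * co2_production
```

For example, a breath ratio of 0.05 and a CO2 production of 3000 L/day would imply a CH4 production of 150 L/day.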

  20. Estimation of transcapillary transport of palmitate by the multiple indicator dilution technique

    International Nuclear Information System (INIS)

    Little, S.E.; van der Vusse, G.J.; Bassingthwaighte, J.B.

    1986-01-01

    From the outflow concentration-time curves for 14 C-palmitate, intravascular ( 131 I-albumin) and extracellular ( 3 H-sucrose) tracers, palmitate extraction was estimated in rabbit hearts Langendorff-perfused at a constant flow with nonrecirculated palmitate-albumin Krebs-Ringer buffer. Contamination of 131 I-albumin with free 131 I - (typically 1%) or aggregated albumin (typically 0.1 to 0.5%) greatly alters the shapes of the tails of the curves after 2 albumin transit times, vitiating accurate estimation of cellular permeability or reactions. Buffers were prepared by adding K + -palmitate (made using K 2 CO 3 ) to albumin solutions. The final concentrations (after dialysing twice and filtering through a 1.2 μm filter) of K + , HCO 3 and CO 3 were 5.0 mM, 23.5 mM and 0.5 mM respectively; the pH was between 7.35 and 7.40 for several hours. The bolus of tracers was prepared by mixing 131 I-albumin (dialysed to remove I - , and filtered through a 0.2 μm filter to remove aggregates), K + [U- 14 C]palmitate (high specific activity) and 3 H-sucrose. Before injection the radioactive bolus was preequilibrated with the perfusate at a bolus:perfusate ratio of 1:10. Glacial acetic acid was added to the outflow samples to remove the 14 CO 2 which, if present in the sample, would be interpreted as increased palmitate back diffusion. The peak extractions of palmitate were about 40% at perfusate palmitate concentrations of 0.02 to 1.0 mM, 0.4 mM albumin, at a flow of 5 ml g -1 min -1 , showing the capillary permeability-surface area product to be roughly constant. This suggests either that transcapillary palmitate transport is passive or that a transporter interacts with the albumin-palmitate complex.
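    Given the reported peak extraction (about 40%) and flow, a permeability-surface area product follows from the standard Crone-Renkin relation; the helper below is a generic sketch, not code from the study:

```python
import numpy as np

def ps_product(extraction, flow):
    # Crone-Renkin estimate: PS = -F * ln(1 - E), with E the peak tracer
    # extraction and F the flow (PS is returned in the units of F)
    return -flow * np.log(1.0 - extraction)
```

With E = 0.40 and F = 5 ml g⁻¹ min⁻¹ this gives PS ≈ 2.6 ml g⁻¹ min⁻¹.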

  1. Estimation of trace levels of plutonium in urine samples by fission track technique

    International Nuclear Information System (INIS)

    Sawant, P.D.; Prabhu, S.; Pendharkar, K.A.; Kalsi, P.C.

    2009-01-01

    Individual monitoring of radiation workers handling Pu in various nuclear installations requires the detection of trace levels of plutonium in bioassay samples. It is necessary to develop methods that can detect urinary excretion of Pu in the fraction-of-a-mBq range. Therefore, a sensitive method based on fission track analysis has been developed for the measurement of trace levels of Pu in bioassay samples. In this technique, plutonium chemically separated from the sample and a Pu standard were electrodeposited on planchettes, covered with Lexan solid state nuclear track detector (SSNTD) and irradiated with thermal neutrons in the APSARA reactor of Bhabha Atomic Research Centre, India. The fission track densities in the Lexan films of the sample and the standard were used to calculate the amount of Pu in the sample. The minimum amount of Pu that can be analyzed by this method using doubly distilled electronic grade (E.G.) reagents is about 12 μBq/L. (author)
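    The comparison against the irradiated standard reduces to a proportionality between fission track densities; a sketch (function name and values hypothetical):

```python
def pu_amount(track_density_sample, track_density_std, pu_std):
    # under identical thermal-neutron irradiation, the induced fission track
    # density is proportional to the amount of Pu on the planchette
    return pu_std * track_density_sample / track_density_std
```

For example, a sample showing half the track density of a 0.024 Bq standard would contain about 0.012 Bq.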

  2. Use of adsorption and gas chromatographic techniques in estimating biodegradation of indigenous crude oils

    International Nuclear Information System (INIS)

    Kokub, D.; Allahi, A.; Shafeeq, M.; Khalid, Z.M.; Malik, K.A.; Hussain, A.

    1993-01-01

    Indigenous crude oils could be degraded and emulsified to varying degrees by locally isolated bacteria. Degradation and emulsification were found to be dependent upon the chemical composition of the crude oils. Tando Alum and Khashkheli crude oils were emulsified in 27 and 33 days of incubation, respectively, while Joyamair crude oil did not emulsify, mainly due to its high viscosity. Using an adsorption chromatographic technique, oil from control (uninoculated) and biodegraded flasks was fractionated into deasphaltened oil (containing saturate, aromatic and NSO (nitrogen, sulphur, oxygen) hydrocarbons) and soluble asphaltenes. Saturate fractions from control and degraded oil were further analysed by gas-liquid chromatography. From these analyses, it was observed that the saturate fraction was preferentially utilized, and the crude oils having greater contents of the saturate fraction were better emulsified than those low in this fraction. Utilization of the various fractions of crude oils was in the order saturate > aromatic > NSO. (author)

  3. A first look at roadheader construction and estimating techniques for site characterization at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Neil, D.M.; Taylor, D.L.

    1991-01-01

    The Yucca Mountain site characterization program will be based on mechanical excavation techniques for the mined repository construction and development. Tunnel Boring Machines (TBMs), Mobile Miners (MM), Raiseborers (RB), Blind Hole Shaft Boring Machines (BHSB), and Roadheaders (RH) have been selected as the mechanical excavation machines most suited to mine the densely welded and non-welded tuffs of the Topopah Springs and Calico Hills members. Heavy-duty RH in the 70 to 100 ton class with 300 kW cutter motors have been evaluated, and formulas have been developed to predict machine performance based on the rock physical properties and the results of Linear Cutting Machine (LCM) tests done at the Colorado School of Mines (CSM) for Sandia National Labs (SNL).

  4. Photometric estimation of plutonium in product solutions and acid waste solutions using flow injection analysis technique

    International Nuclear Information System (INIS)

    Dhas, A.J.A.; Dharmapurikar, G.R.; Kumaraguru, K.; Vijayan, K.; Kapoor, S.C.; Ramanujam, A.

    1995-01-01

    Flow injection analysis technique is employed for the measurement of plutonium concentrations in product nitrate solutions by measuring the absorbance of Pu(III) at 565 nm and of Pu(IV) at 470 nm, using a Metrohm 662 photometer, with a Pyrex glass tube of 2 mm (ID) inserted in the light path of the detector serving as a flow cell. The photometer detector never comes in contact with the radioactive solution. In the case of acid waste solutions, Pu is first purified by extraction chromatography with 2-ethylhexyl hydrogen 2-ethylhexyl phosphonate (KSM 17)-chromosorb, and the Pu in the eluate is complexed with Arsenazo III, followed by measurement of the absorbance at 665 nm. Absorbances of reference solutions in the desired concentration ranges are measured to calibrate the system. The results obtained agree with the reference values within ±2.0%. (author). 3 refs., 1 tab

  5. The Technique for the Numerical Tolerances Estimations in the Construction of Compensated Accelerating Structures

    CERN Document Server

    Paramonov, V V

    2004-01-01

    The requirements on the cell manufacturing precision and tuning in the construction of multi-cell accelerating structures come from the required accelerating field uniformity, based on beam dynamics demands. The standard deviation of the field distribution depends on the deviations of the accelerating and coupling mode frequencies, the stop-band width and the coupling coefficient. These deviations can be determined from the 3D field distributions for the accelerating and coupling modes and the cell surface displacements. With modern software this can be done separately for every specified part of the cell surface. Finally, the cell surface displacements are derived from the deviations of the cell dimensions. This technique allows one both to identify qualitatively the critical regions and to optimize quantitatively the tolerance definition.

  6. Aspergillus specific IgE estimation by radioallergosorbent technique (RAST) in obstructive airways disease at Agra

    International Nuclear Information System (INIS)

    Sharma, S.K.; Singh, R.; Mehrotra, M.P.; Patney, N.L.; Sachan, A.S.; Shiromany, A.

    1986-01-01

    The radioallergosorbent technique (RAST) was used to measure the levels of Aspergillus specific IgE in 25 normal controls, 25 cases of extrinsic bronchial asthma and 25 cases of allergic broncho-pulmonary aspergillosis with a view to study the clinical role and its correlation with sputum culture, skin sensitivity and severity of airways obstruction. The test was performed using Pharmacia diagnostic kits with antigen derived from Aspergillus fumigatus. Abnormal levels of Aspergillus specific IgE were observed in 84 per cent cases of bronchial asthma but none of the controls. 86.7 per cent of all cases with positive skin test had positive radioallergosorbent test and there was no false positive reaction. There was a positive correlation of Aspergillus specific IgE with skin test positivity and with FEV 1 /FVC per cent. (author)

  7. Comparison of internal radiation doses estimated by MIRD and voxel techniques for a ''family'' of phantoms

    International Nuclear Information System (INIS)

    Smith, T.

    2000-01-01

    The aim of this study was to use a new system of realistic voxel phantoms, based on computed tomography scanning of humans, to assess its ability to specify the internal dosimetry of selected human examples in comparison with the well-established MIRD system of mathematical anthropomorphic phantoms. Differences in specific absorbed fractions between the two systems were inferred by using organ dose estimates as the end point for comparison. A ''family'' of voxel phantoms, comprising an 8-week-old baby, a 7-year-old child and a 38-year-old adult, was used and a close match to these was made by interpolating between organ doses estimated for pairs of the series of six MIRD phantoms. Using both systems, doses were calculated for up to 22 organs for four radiopharmaceuticals with widely differing biodistribution and emission characteristics (technetium-99m pertechnetate, administered without thyroid blocking; iodine-123 iodide; indium-111 antimyosin; oxygen-15 water). Organ dose estimates under the MIRD system were derived using the software MIRDOSE 3, which incorporates specific absorbed fraction (SAF) values for the MIRD phantom series. The voxel system uses software based on the same dose calculation formula in conjunction with SAF values determined by Monte Carlo analysis at the GSF of the three voxel phantoms. Effective doses were also compared. Substantial differences in organ weights were observed between the two systems, 18% differing by more than a factor of 2. Out of a total of 238 organ dose comparisons, 5% differed by more than a factor of 2 between the systems; these included some doses to walls of the GI tract, a significant result in relation to their high tissue weighting factors. Some of the largest differences in dose were associated with organs of lower significance in terms of radiosensitivity (e.g. thymus). 
In this small series, voxel organ doses tended to exceed MIRD values, on average, and a 10% difference was significant when all 238 organ doses
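    Both dose systems compared above rest on the MIRD schema, in which the absorbed dose to a target organ is a sum over source organs of cumulated activity times an S value; a minimal generic sketch (not the MIRDOSE 3 implementation):

```python
def mird_dose(cumulated_activities, s_values):
    # D(target) = sum over source regions of A~(source) * S(target <- source),
    # where S is the absorbed dose to the target per unit cumulated activity
    return sum(a * s for a, s in zip(cumulated_activities, s_values))
```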

  8. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding graphical user interface (GUI) to become user friendly tools for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. 
A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average
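    At its simplest, the S/N figure such tools report compares the peak defect echo to the rms grain noise; the peak-to-rms convention below is an assumption (definitions vary in practice):

```python
import numpy as np

def snr_db(defect_signal, noise_record):
    # peak defect-echo amplitude over rms backscattered grain noise, in dB
    peak = np.max(np.abs(defect_signal))
    rms = np.sqrt(np.mean(np.square(noise_record)))
    return 20.0 * np.log10(peak / rms)
```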

  9. The advantages, and challenges, in using multiple techniques in the estimation of surface water-groundwater fluxes.

    Science.gov (United States)

    Shanafield, M.; Cook, P. G.

    2014-12-01

    When estimating surface water-groundwater fluxes, the use of complementary techniques helps to fill in the uncertainties of any individual method and to potentially gain a better understanding of spatial and temporal variability in a system. It can also be a way of preventing the loss of data during infrequent and unpredictable flow events. For example, much of arid Australia relies on groundwater, which is recharged by streamflow through ephemeral streams during flood events. Three recent surface water/groundwater investigations from arid Australian systems provide good examples of how using multiple field and analysis techniques can help to more fully characterize surface water-groundwater fluxes, but can also result in conflicting values over varying spatial and temporal scales. In the Pilbara region of Western Australia, combining streambed radon measurements, vertical heat transport modeling, and a tracer test helped constrain very low streambed residence times, which are on the order of minutes. Spatial and temporal variability between the methods yielded hyporheic exchange estimates between 10⁻⁴ m² s⁻¹ and 4.2 × 10⁻² m² s⁻¹. In South Australia, three-dimensional heat transport modeling captured heterogeneity within 20 square meters of streambed, identifying areas of sandy soil (flux rates of up to 3 m d⁻¹) and clay (flux rates too slow to be accurately characterized). Streamflow front modeling showed similar flux rates, but averaged over 100 m long stream segments for a 1.6 km reach. Finally, in central Australia, several methods are used to decipher whether any of the flow down a highly ephemeral river contributes to regional groundwater recharge, showing that evaporation and evapotranspiration likely account for all of the infiltration into the perched aquifer. Lessons learned from these examples demonstrate the influence of the spatial and temporal variability between techniques on the estimated fluxes.

  10. Using Convective Stratiform Technique (CST) method to estimate rainfall (case study in Bali, December 14th 2016)

    Science.gov (United States)

    Vista Wulandari, Ayu; Rizki Pratama, Khafid; Ismail, Prayoga

    2018-05-01

    Accurate and real-time rainfall data over a wide spatial domain are still a problem because of the unavailability of rainfall observations in every region. Weather satellites have a very wide observation range and can be used to determine rainfall variability with better resolution than limited direct observation. This study uses Himawari-8 satellite data to estimate rainfall with the Convective Stratiform Technique (CST). The CST method is performed by separating convective and stratiform cloud components using infrared channel satellite data. Cloud components are classified separately because their physical and dynamic growth processes are very different. This research was conducted in the Bali area on December 14, 2016, by verifying the result of the CST process against rainfall data from the Ngurah Rai Meteorological Station, Bali. It was found that the CST result had values similar to the observations at the Ngurah Rai meteorological station, so it is assumed that the CST method can be used for rainfall estimation in the Bali region.
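    The convective/stratiform separation can be caricatured with brightness-temperature thresholds; the thresholds below are illustrative only, not the operational CST coefficients:

```python
import numpy as np

def classify(tb, conv_thresh=220.0, strat_thresh=240.0):
    # 0 = no rain, 1 = stratiform, 2 = convective (toy IR classification;
    # the threshold values, in kelvin, are assumptions for this sketch)
    out = np.zeros_like(tb, dtype=int)
    out[tb < strat_thresh] = 1
    out[tb < conv_thresh] = 2
    return out
```

Rain rates would then be assigned per class and accumulated per pixel.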

  11. Application of fission track technique for estimation of uranium concentration in drinking waters of Punjab

    International Nuclear Information System (INIS)

    Prabhu, S.P.; Raj, Sanu S.; Sawant, P.D.; Kumar, Ajay; Sarkar, P.K.; Tripathi, R.M.

    2010-01-01

    Full text: Drinking water samples were collected from four different districts of Punjab, namely Bhatinda, Mansa, Faridkot and Firozpur, for ascertaining U(nat.) concentrations. The samples were collected from bore wells, hand pumps, tube wells and treated municipal water supply. All the samples collected (235 nos.) were preserved and processed following the international standard protocol and analyzed by laser fluorimetry. Results of the analysis by laser fluorimetry have already been reported. To ensure the accuracy of the data obtained by laser fluorimetry, a few samples (10 nos.) from each district were also analyzed by alpha spectrometry as well as by the fission track analysis (FTA) technique. FTA in solution media for uranium has already been standardized in the Bioassay laboratory of the Health Physics Division. A portion of each drinking water sample was directly transferred to a polythene tube sealed at one end. A Lexan detector with a proper identification mark was immersed in the sample and the other, open end of the tube was also heat-sealed. Two tubes containing samples and one containing a uranium standard (80 ppb) were irradiated in the Pneumatic Carrier Facility (PCF) of the DHRUVA reactor. The Lexan detectors were then chemically etched and the tracks were counted under an optical microscope at 400X magnification. The concentration of uranium in each sample was determined by the comparison technique. Quality assurance was carried out by replicate analysis and by analysis of standard reference materials. Uranium concentrations in these samples ranged from 3.2 to 60.5 ppb with an average of 28.8 ppb. A t-test analysis for paired data was done to compare the results obtained by FTA with those obtained by the laser fluorimeter. The calculated value of t is -1.19, which is greater than the tabulated value of t for 40 observations (-2.02 at the 95% confidence level). This shows that the results of the measurements carried out by FTA and laser fluorimetry are not significantly different. The preliminary studies
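    The paired t-test used for the method comparison is available in SciPy; the measurements below are randomly generated placeholders, not the Punjab data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_u = rng.uniform(3.0, 60.0, 10)            # hypothetical uranium levels (ppb)
fta = true_u + rng.normal(0.0, 1.5, 10)        # FTA results with measurement noise
laser = true_u + rng.normal(0.0, 1.5, 10)      # laser fluorimetry results
t_stat, p_value = stats.ttest_rel(fta, laser)  # paired (related-samples) t-test
```

A |t| below the tabulated critical value (equivalently, p above 0.05) indicates no significant difference between the methods.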

  12. Accuracy in estimation of timber assortments and stem distribution - A comparison of airborne and terrestrial laser scanning techniques

    Science.gov (United States)

    Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2014-11-01

    Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
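    The relative root-mean-squared error used to score the volume estimates against the harvester reference can be computed as (sketch):

```python
import numpy as np

def relative_rmse(predicted, observed):
    # RMSE expressed as a percentage of the mean observed (reference) value
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)
```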

  13. Optimal Design for Reactivity Ratio Estimation: A Comparison of Techniques for AMPS/Acrylamide and AMPS/Acrylic Acid Copolymerizations

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2015-11-01

    Full Text Available Water-soluble polymers of acrylamide (AAm) and acrylic acid (AAc) have significant potential in enhanced oil recovery, as well as in other specialty applications. To improve the shear strength of the polymer, a third comonomer, 2-acrylamido-2-methylpropane sulfonic acid (AMPS), can be added to the pre-polymerization mixture. Copolymerization kinetics of AAm/AAc are well studied, but little is known about the other comonomer pairs (AMPS/AAm and AMPS/AAc). Hence, reactivity ratios for AMPS/AAm and AMPS/AAc copolymerization must be established first. A key aspect in the estimation of reliable reactivity ratios is design of experiments, which minimizes the number of experiments and provides increased information content (resulting in more precise parameter estimates). However, design of experiments is hardly ever used in copolymerization parameter estimation schemes. In the current work, copolymerization experiments for both AMPS/AAm and AMPS/AAc are designed using two optimal techniques (Tidwell-Mortimer and the error-in-variables-model (EVM)). From these optimally designed experiments, accurate reactivity ratio estimates are determined for AMPS/AAm (rAMPS = 0.18, rAAm = 0.85) and AMPS/AAc (rAMPS = 0.19, rAAc = 0.86).
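    With reactivity ratios in hand, the instantaneous copolymer composition follows from the Mayo-Lewis (terminal model) equation; the helper below is a generic sketch, not the authors' EVM code:

```python
def mayo_lewis(f1, r1, r2):
    # instantaneous mole fraction F1 of monomer 1 in the copolymer,
    # given feed mole fraction f1 and terminal-model reactivity ratios
    f2 = 1.0 - f1
    return (r1 * f1 ** 2 + f1 * f2) / (r1 * f1 ** 2 + 2.0 * f1 * f2 + r2 * f2 ** 2)
```

For an equimolar AMPS/AAm feed with rAMPS = 0.18 and rAAm = 0.85, this gives F_AMPS ≈ 0.39, i.e. the copolymer is slightly depleted in AMPS.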

  14. A positional estimation technique for an autonomous land vehicle in an unstructured environment

    Science.gov (United States)

    Talluri, Raj; Aggarwal, J. K.

    1990-01-01

    This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.

  15. Techniques for estimating flood-depth frequency relations for streams in West Virginia

    Science.gov (United States)

    Wiley, J.B.

    1987-01-01

    Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth of 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state, where mean basin elevation also is significant. The equations are applicable to any unregulated site in West Virginia where values of independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps. (Author's abstract)
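
    Regional depth equations of this kind are commonly fit as power laws of drainage area by linear regression in log space; a sketch with made-up station data (not the West Virginia dataset):

```python
import numpy as np

# Hypothetical station data: drainage area (mi^2) and observed 100-yr flood depth (ft)
area = np.array([10.0, 50.0, 120.0, 300.0, 800.0])
depth = np.array([4.2, 7.1, 9.5, 12.8, 18.0])

# Fit depth = a * area**b by linear regression in log space
b, log_a = np.polyfit(np.log(area), np.log(depth), 1)
a = np.exp(log_a)

predicted = a * area**b
print(round(a, 2), round(b, 3))
```

A real regional analysis would add further basin characteristics (e.g. mean basin elevation) as extra regressors.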

  16. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN, a static neural network; NARX network, a dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentrations at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system where the data of interest are missing, hazardous, or costly to collect. Copyright © 2016 Elsevier B.V. All rights reserved.
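
    The defining feature of a NARX model is its input structure: lagged outputs plus lagged exogenous drivers. A linear stand-in (the study itself used a neural network, and the data below are synthetic) shows that structure recovering the coefficients of a toy dynamic system:

```python
import numpy as np

def narx_design_matrix(y, x, lags=2):
    """Stack lagged outputs and lagged exogenous inputs, the input structure a NARX model uses."""
    rows, targets = [], []
    for t in range(lags, len(y)):
        rows.append(np.concatenate([y[t - lags:t], x[t - lags:t]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(0)
x = rng.normal(size=200)                 # exogenous driver (e.g. flow or rainfall)
y = np.zeros(200)
for t in range(2, 200):                  # synthetic dynamic system
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * x[t - 1] + 0.01 * rng.normal()

X, target = narx_design_matrix(y, x, lags=2)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 2))   # ≈ [-0.2, 0.6, 0.0, 0.5], matching the generating system
```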

  17. Biologically Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for joint estimation of the amplitude and direction of arrival of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and targets are assumed to be in the far field of the array. Formulation of the fitness function is based on mean square error, and this function requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
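
    A minimal global-best PSO of the general kind described here, minimizing a mean-square-error-style fitness on a toy amplitude-recovery problem (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal global-best particle swarm optimizer minimizing `fitness`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Mean-square-error fitness on a toy problem: recover a two-element amplitude vector
target = np.array([1.0, -2.0])
best, best_val = pso(lambda p: np.mean((p - target) ** 2), dim=2)
print(np.round(best, 2))
```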

  18. Different techniques of excess 210Pb for sedimentation rate estimation in the Sarawak and Sabah coastal waters

    International Nuclear Information System (INIS)

    Zal Uyun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abdul Rahim Mohamed

    2010-01-01

    Sediment core samples were collected at eight stations in the Sarawak and Sabah coastal waters using a gravity box corer to estimate sedimentation rates based on the activity of excess 210Pb. The sedimentation rates derived from the four mathematical models of CIC, Shukla-CIC, CRS and ADE were generally in good agreement, with similar or comparable values at all stations. However, statistical analysis based on an independent-sample t-test indicated that the Shukla-CIC model was the most accurate, reliable and suitable technique to determine the sedimentation rate in the study area. (author)
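
    Under the CIC model named above, excess 210Pb decays exponentially with depth, so the sedimentation rate follows from the slope of ln(activity) versus depth; a sketch on a synthetic profile:

```python
import numpy as np

PB210_LAMBDA = 0.03114  # 210Pb decay constant, 1/yr (half-life ≈ 22.3 yr)

def cic_sedimentation_rate(depth_cm, excess_pb210):
    """CIC model: ln(excess 210Pb) is linear in depth; slope = -lambda/S, so S = -lambda/slope."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -PB210_LAMBDA / slope  # cm/yr

# Synthetic profile generated for a known rate of 0.5 cm/yr
depth = np.arange(0, 30, 2.0)
activity = 80.0 * np.exp(-PB210_LAMBDA * depth / 0.5)
print(round(cic_sedimentation_rate(depth, activity), 2))  # → 0.5
```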

  19. Comparisons and Uncertainty in Fat and Adipose Tissue Estimation Techniques: The Northern Elephant Seal as a Case Study.

    Directory of Open Access Journals (Sweden)

    Lisa K Schwarz

    Full Text Available Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of proportion adipose tissue in adult females and proportion fat of juveniles in northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may
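
    The truncated cones method sums the volumes of frusta between successive girth measurements; since the abstract notes seals are elliptical rather than circular in cross-section, each segment can be treated as an elliptical frustum. A sketch of that geometric step (measurements hypothetical):

```python
import math

def elliptical_frustum_volume(a1, b1, a2, b2, h):
    """Volume of a frustum of length h with elliptical end cross-sections
    (semi-axes a1,b1 and a2,b2). Reduces to the circular frustum when a == b."""
    A1 = math.pi * a1 * b1
    A2 = math.pi * a2 * b2
    return h / 3.0 * (A1 + A2 + math.sqrt(A1 * A2))

# Sanity check: equal circular ends give a cylinder, V = pi * r^2 * h
v = elliptical_frustum_volume(0.5, 0.5, 0.5, 0.5, 2.0)
print(round(v, 4))  # → 1.5708 (pi/2)
```

Summing such segments along the body, then multiplying blubber volume by blubber density, gives a blubber mass estimate.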

  20. Life management of Zr 2.5% Nb pressure tube through estimation of fracture properties by cyclic ball indentation technique

    International Nuclear Information System (INIS)

    Chatterjee, S.; Madhusoodanan, K.; Rama Rao, A.

    2015-01-01

    In Pressurised Heavy Water Reactors (PHWRs), fuel bundles are located inside horizontal pressure tubes. Pressure tubes made of Zr 2.5 wt% Nb undergo degradation under in-service environmental conditions. Measurement of the mechanical properties of degraded pressure tubes is important for assessing their fitness for further service in the reactor. The only way to accomplish this important objective is to develop a system based on an in-situ measurement technique. Considering the importance of such measurement, an In-situ Property Measurement System (IProMS) based on the cyclic ball indentation technique has been designed and developed indigenously. The remotely operable system is capable of carrying out an indentation trial on the inside surface of the pressure tube and of estimating important mechanical properties like yield strength, ultimate tensile strength and hardness. It is known that fracture toughness is one of the important life-limiting parameters of the pressure tube. Hence, five spool pieces of Zr 2.5 wt% Nb pressure tube of different mechanical properties have been used for estimation of fracture toughness by the ball indentation method. Curved Compact Tension (CCT) specimens were also prepared from the five spool pieces for measurement of fracture toughness by conventional tests. The conventional fracture toughness values were used as reference data. A methodology has been developed to estimate the fracture properties of Zr 2.5 wt% Nb pressure tube material from the analysis of the ball indentation test data. This paper highlights the comparison between tensile properties measured from conventional tests and IProMS trials, and relates the fracture toughness parameters measured from conventional tests with IProMS-estimated fracture properties like the Indentation Energy to Fracture. (author)

  1. On advanced estimation techniques for exoplanet detection and characterization using ground-based coronagraphs

    Science.gov (United States)

    Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry

    2012-07-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO, have predicted contrast performance roughly a thousand times poorer than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.
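
    ROC-based comparison of detection techniques reduces to sweeping a threshold over each method's detection statistic; a sketch of an empirical ROC curve and its area on synthetic scores (not coronagraph data):

```python
import numpy as np

def roc_curve(scores_signal, scores_noise):
    """Empirical ROC: sweep a threshold over all observed scores, return (FPR, TPR)."""
    thresholds = np.sort(np.concatenate([scores_signal, scores_noise]))[::-1]
    tpr = np.array([(scores_signal >= t).mean() for t in thresholds])
    fpr = np.array([(scores_noise >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, 500)      # detection statistic under "no planet"
signal = rng.normal(1.5, 1.0, 500)     # detection statistic under "planet present"

fpr, tpr = roc_curve(signal, noise)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal area under ROC
print(round(auc, 2))
```

An ideal-observer comparison would replace the raw statistic with the likelihood ratio (or the Hotelling test statistic) before thresholding.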

  2. A Comprehensive Review on Water Quality Parameters Estimation Using Remote Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Mohammad Haji Gholizadeh

    2016-08-01

    Full Text Available Remotely sensed data can reinforce the abilities of water resources researchers and decision makers to monitor waterbodies more effectively. Remote sensing techniques have been widely used to measure the qualitative parameters of waterbodies (i.e., suspended sediments, colored dissolved organic matter (CDOM), chlorophyll-a, and pollutants). A large number of different sensors on board various satellites and other platforms, such as airplanes, are currently used to measure the amount of radiation at different wavelengths reflected from the water’s surface. In this review paper, various properties (spectral, spatial, temporal, etc.) of the more commonly employed spaceborne and airborne sensors are tabulated to be used as a sensor selection guide. Furthermore, this paper investigates the commonly used approaches and sensors employed in evaluating and quantifying the eleven water quality parameters. The parameters include: chlorophyll-a (chl-a), colored dissolved organic matter (CDOM), Secchi disk depth (SDD), turbidity, total suspended sediments (TSS), water temperature (WT), total phosphorus (TP), sea surface salinity (SSS), dissolved oxygen (DO), biochemical oxygen demand (BOD) and chemical oxygen demand (COD).

  3. Estimation of fracture aperture using simulation technique; Simulation wo mochiita fracture kaiko haba no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, T [Geological Survey of Japan, Tsukuba (Japan); Abe, M [Tohoku University, Sendai (Japan). Faculty of Engineering

    1996-10-01

    Characteristics of amplitude variation around fractures were investigated using a simulation technique in which the fracture aperture was varied. Four models were used. Model 1 was a fracture model having a horizontal fracture at Z=0. In Model 2, the fracture was replaced by a group of small fractures. Model 3 had a borehole diameter extended at Z=0 in the shape of a wedge. Model 4 had a low-velocity layer at Z=0. The maximum amplitudes were compared across depths and models. For Model 1, the amplitude became larger at the depth of the fracture, and smaller above the fracture. For Model 2, when the cross width D increased to 4 cm, the amplitude approached that of Model 1. For Model 3, with the extended borehole diameter, almost no change of amplitude was observed above and below the fracture when the extension of the borehole diameter ranged between 1 cm and 2 cm. However, when the extension of the borehole diameter was 4 cm, the amplitude became smaller above the extended part of the borehole. 3 refs., 4 figs., 1 tab.

  4. Soil Erosion Estimation Using Remote Sensing Techniques in Wadi Yalamlam Basin, Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Jarbou A. Bahrawi

    2016-01-01

    Full Text Available Soil erosion is one of the major environmental problems in terms of soil degradation in Saudi Arabia. Soil erosion leads to significant on- and off-site impacts such as a significant decrease in the productive capacity of the land and sedimentation. The key aspects influencing the quantity of soil erosion mainly rely on the vegetation cover, topography, soil type, and climate. This research studies the quantification of soil erosion under different levels of data availability in Wadi Yalamlam. Remote Sensing (RS) and Geographic Information Systems (GIS) techniques have been implemented for the assessment of the data, applying the Revised Universal Soil Loss Equation (RUSLE) for the calculation of the risk of erosion. Thirty-four soil samples were randomly selected for the calculation of the erodibility factor, based on calculating the K-factor values derived from soil property surfaces after interpolating soil sampling points. The soil erosion risk map was reclassified into five erosion risk classes, and 19.3% of the Wadi Yalamlam is under very severe risk (37,740 ha). GIS and RS proved to be powerful instruments for mapping soil erosion risk, providing sufficient tools for the analytical part of this research. The mapping results certified the role of RUSLE as a decision support tool.
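
    The RUSLE combination applied here is a cell-by-cell product of five factor rasters, A = R·K·LS·C·P; a sketch on a toy 2×2 grid (all factor values invented for illustration):

```python
import numpy as np

# RUSLE: A = R * K * LS * C * P (soil loss per unit area, e.g. t/ha/yr)
R  = np.full((2, 2), 300.0)                 # rainfall erosivity factor
K  = np.array([[0.20, 0.25], [0.30, 0.22]]) # soil erodibility (interpolated K-factor surface)
LS = np.array([[1.2, 2.5], [0.8, 3.1]])     # slope length and steepness factor
C  = np.array([[0.10, 0.35], [0.05, 0.45]]) # cover management factor
P  = np.full((2, 2), 1.0)                   # support practice factor (none)

A = R * K * LS * C * P                      # soil loss raster
print(np.round(A, 1))
```

Reclassifying `A` into bins (e.g. slight to very severe) then yields a risk map like the one described.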

  5. A helium-3 proportional counter technique for estimating fast and intermediate neutrons

    International Nuclear Information System (INIS)

    Kosako, Toshiso; Nakazawa, Masaharu; Sekiguchi, Akira; Wakabayashi, Hiroaki.

    1976-11-01

    A 3He proportional counter was employed to determine fast and intermediate neutron spectra over a wide energy region. The mixed-gas (3He, Kr) counter response and the spectrum unfolding code were prepared and applied to several neutron fields. The counter response calculation was performed using a Monte Carlo code, with particular attention to the particle range calculation in the mixed gas. An experiment was carried out using a Van de Graaff accelerator to check the response function. The spectrum unfolding code was prepared so that it automatically evaluates the effect of the higher-energy part of the spectrum on the pulse height distribution in the lower-energy region. The neutron spectra of various neutron fields were measured and compared with calculations such as discrete-ordinates Sn calculations. It became clear that the technique developed here can be applied to practical use in the neutron energy range from about 150 keV to 5 MeV. (auth.)
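
    Spectrum unfolding of this kind amounts to inverting a response matrix: the measured pulse-height distribution m relates to the neutron spectrum φ through R·φ = m. A least-squares sketch on a toy response matrix (the actual code used an iterative scheme with the higher-energy correction described above; all numbers here are invented):

```python
import numpy as np

# Toy 4-bin response matrix: column j is the detector's pulse-height response to a unit
# fluence in energy bin j (hypothetical values for illustration)
R = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.0, 0.5, 0.2, 0.1],
    [0.0, 0.0, 0.6, 0.2],
    [0.0, 0.0, 0.0, 0.7],
])
true_spectrum = np.array([5.0, 3.0, 2.0, 1.0])
measured = R @ true_spectrum            # simulated pulse-height distribution

# Least-squares unfolding: solve R * phi = m for the neutron spectrum phi
unfolded, *_ = np.linalg.lstsq(R, measured, rcond=None)
print(np.round(unfolded, 2))
```

Real unfolding adds non-negativity constraints and regularization, since measured distributions are noisy and R is ill-conditioned.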

  6. Improved seismic risk estimation for Bucharest, based on multiple hazard scenarios, analytical methods and new techniques

    Science.gov (United States)

    Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen

    2014-05-01

    Bucharest, capital of Romania (with 1,678,000 inhabitants in 2011), is one of the big cities in Europe most exposed to seismic damage. The major earthquakes affecting the city have their origin in the Vrancea region. The Vrancea intermediate-depth source generates, statistically, 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensity IX (MSK) for the 1940 event. During the 1977 earthquake, 1420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable both due to construction (material, age) and soil conditions (high amplification generated within the weakly consolidated Quaternary deposits, whose thickness varies from 250 to 500 m throughout the city). Of 2563 old buildings evaluated by experts, 373 are likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings, in 2011, was 113900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest, in terms of building, population or economic damage probability. Unfortunately, most of them were based on incomplete sets of data, whether regarding the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database for buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, the real-time estimation system developed there has not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest; first, we analyze the previous seismic risk studies, from a SWOT perspective.
This reveals that most of the studies don't use

  7. A voxel-based technique to estimate the volume of trees from terrestrial laser scanner data

    Science.gov (United States)

    Bienert, A.; Hess, C.; Maas, H.-G.; von Oheimb, G.

    2014-06-01

    The precise determination of the volume of standing trees is very important for ecological and economical considerations in forestry. If terrestrial laser scanner data are available, a simple approach for volume determination is given by allocating points into a voxel structure and subsequently counting the filled voxels. Generally, this method will overestimate the volume. The paper presents an improved algorithm to estimate the wood volume of trees using a voxel-based method which corrects for the overestimation. After voxel space transformation, each voxel which contains points is reduced to the volume of its surrounding bounding box. In a next step, occluded (inner stem) voxels are identified by a neighbourhood analysis sweeping in the X and Y direction of each filled voxel. Finally, the wood volume of the tree is composed of the sum of the bounding box volumes of the outer voxels and the volume of all occluded inner voxels. Scan data sets from several young Norway maple trees (Acer platanoides) were used to analyse the algorithm. For this purpose, the scanned trees, as well as their representing point clouds, were separated into different components (stem, branches) to allow a meaningful comparison. Two reference measurements were performed for validation: a direct wood volume measurement by placing the tree components into a water tank, and a frustum calculation of small trunk segments by measuring the radii along the trunk. Overall, the results show slightly underestimated volumes (-0.3% for a sample of 13 trees) with an RMSE of 11.6% for the individual tree volume calculated with the new approach.
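
    The simple voxel-counting baseline that the paper improves on can be sketched directly (synthetic points; the paper's refinement then shrinks surface voxels to their point bounding boxes and adds occluded inner voxels):

```python
import numpy as np

def voxel_volume(points, voxel_size=0.01):
    """Naive voxel-counting volume: allocate points to a grid and count filled voxels.
    This is the simple approach that tends to overestimate real wood volume."""
    idx = np.floor(points / voxel_size).astype(int)   # voxel index per point
    n_filled = len(np.unique(idx, axis=0))            # number of distinct filled voxels
    return n_filled * voxel_size**3

# Synthetic dense point cloud filling a 0.1 m cube
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 0.1, (50000, 3))
print(round(voxel_volume(pts, 0.01), 6))  # → 0.001 (the cube's true volume, m^3)
```

For a sparse surface-only scan of a trunk, the same count would overshoot the true volume, which motivates the bounding-box correction.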

  8. A preliminary study on sedimentation rate in Tasek Bera Lake estimated using Pb-210 dating technique

    International Nuclear Information System (INIS)

    Wan Zakaria Wan Muhamad Tahir; Johari Abdul Latif; Juhari Mohd Yusof; Kamaruzaman Mamat; Gharibreza, M.R.

    2010-01-01

    Tasek Bera is the largest natural lake system (60 ha) in Malaysia, located in southwest Pahang. The lake is a complex dendritic system consisting of extensive peat-swamp forests. The catchment was originally lowland dipterocarp forest, but over the past four decades this has largely been replaced with oil palm and rubber plantations developed by the Federal Land Development Authority (FELDA). Besides its environmental importance, Tasek Bera is seriously affected by erosion, sedimentation and morphological changes. Accurate knowledge of sedimentation rates and their causes is of utmost importance for appropriate management of lakes and future planning. In the present study, the environmental (natural) 210Pb dating technique was applied to determine the sedimentation rate and pattern, as well as the chronology of sediment deposits in Tasek Bera Lake. Three undisturbed core samples from different locations, at the main entry and exit points of the river mouth and in open water within the lake, were collected during a field sampling campaign in October 2009 and analyzed for 210Pb using the gamma spectrometry method. The undisturbed sediments are classified as organic soils to peat with clayey texture, composed of 93 % clay, 5 % silt, and 2 % very fine sand. Comparatively higher sedimentation rates were noticed at the entry (0.06-1.58 cm/yr) and exit (0.05-1.55 cm/yr) points of the main river mouth than in the lake's open water (0.02-0.74 cm/yr). Reasons for the different patterns of sedimentation rates in this lake are discussed in this paper, and conclusions are drawn. (author)

  9. Estimating rumen microbial protein supply for indigenous ruminants using nuclear and purine excretion techniques in Indonesia

    International Nuclear Information System (INIS)

    Soejono, M.; Yusiati, L.M.; Budhi, S.P.S.; Widyobroto, B.P.; Bachrudin, Z.

    1999-01-01

    The microbial protein supply to ruminants can be estimated based on the amount of purine derivatives (PD) excreted in the urine. Four experiments were conducted to evaluate the PD excretion method for Bali and Ongole cattle. In the first experiment, six male, two-year-old Bali cattle (Bos sondaicus) and six Ongole cattle (Bos indicus) of similar sex and age were used to quantify the endogenous contribution to total PD excretion in the urine. In the second experiment, four cattle from each breed were used to examine the response of PD excretion to feed intake. 14C-uric acid was injected in a single dose to define the partitioning ratio of renal:non-renal losses of plasma PD. The third experiment was conducted to examine the ratio of purine N:total N in a mixed rumen microbial population. The fourth experiment measured the enzyme activities of blood, liver and intestinal tissues concerned with PD metabolism. The results of the first experiment showed that endogenous PD excretion was 145 ± 42.0 and 132 ± 20.0 μmol/kg W^0.75/d for Bali and Ongole cattle, respectively. The second experiment indicated that the proportion of plasma PD excreted in the urine of Bali and Ongole cattle was 0.78 and 0.77, respectively. Hence, the prediction of purine absorbed based on PD excretion can be stated as Y = 0.78X + 0.145W^0.75 and Y = 0.77X + 0.132W^0.75 for Bali and Ongole cattle, respectively. The third experiment showed that there were no differences in the ratio of purine N:total N in mixed rumen microbes of Bali and Ongole cattle (17% vs 18%). The last experiment showed that the intestinal xanthine oxidase activity of Bali cattle was lower than that of Ongole cattle (0.001 vs 0.015 μmol uric acid produced/min/g tissue), but xanthine oxidase activity in the blood and liver of Bali cattle was higher than that of Ongole cattle (3.48 vs 1.34 μmol/min/L plasma and 0.191 vs 0.131 μmol/min/g liver tissue). Thus, there was no difference in PD excretion between these two breeds.

  10. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
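
    The correlations surveyed in the references above reduce, in their simplest forms, to an irradiance-driven temperature rise with or without a wind-speed term; a sketch of both forms (the NOCT value and the u0/u1 coefficients are illustrative defaults, not values fitted in this study):

```python
def module_temp_noct(t_ambient, irradiance, noct=45.0):
    """Basic NOCT estimate of PV module temperature (no wind dependence):
    T_mod = T_amb + (NOCT - 20) * G / 800, with G in W/m^2 and temperatures in deg C."""
    return t_ambient + (noct - 20.0) * irradiance / 800.0

def module_temp_with_wind(t_ambient, irradiance, wind_speed, u0=26.9, u1=6.2):
    """Faiman-style wind-dependent form: T_mod = T_amb + G / (u0 + u1 * v).
    u0, u1 are module-specific fit coefficients (values here are typical, not measured)."""
    return t_ambient + irradiance / (u0 + u1 * wind_speed)

print(round(module_temp_noct(25.0, 800.0), 1))           # 25 + 25 * 800/800 = 50.0
print(round(module_temp_with_wind(25.0, 800.0, 4.0), 1)) # wind cooling lowers the estimate
```

Comparing the two at moderate wind speeds shows why ignoring wind systematically overestimates module temperature.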

  11. Exploration of deep S-wave velocity structure using microtremor array technique to estimate long-period ground motion

    International Nuclear Information System (INIS)

    Sato, Hiroaki; Higashi, Sadanori; Sato, Kiyotaka

    2007-01-01

    In this study, microtremor array measurements were conducted at 9 sites in the Niigata plain to explore deep S-wave velocity structures for the estimation of long-period earthquake ground motion. The 1D S-wave velocity profiles in the Niigata plain are characterized by 5 layers with S-wave velocities of 0.4, 0.8, 1.5, 2.1 and 3.0 km/s, respectively. The depth to the basement layer is greater in the Niigata port area, located on the Japan Sea side of the Niigata plain. In this area, the basement depth is about 4.8 km around the Seirou town and about 4.1 km around the Niigata city, respectively. These features of the basement depth in the Niigata plain are consistent with previous surveys. In order to verify the profiles derived from the microtremor array exploration, we estimated the group velocities of Love waves for four propagation paths of long-period earthquake ground motion during the Niigata-ken tyuetsu earthquake by the multiple filter technique, and compared them with the theoretical ones calculated from the derived profiles. As a result, it was confirmed that the group velocities from the derived profiles were in good agreement with the ones from the long-period earthquake ground motion records during the Niigata-ken tyuetsu earthquake. Furthermore, we applied the estimation method of design basis earthquake input for seismically isolated nuclear power facilities, based on the normal mode solution, to estimate long-period earthquake ground motion during the Niigata-ken tyuetsu earthquake. As a result, it was demonstrated that the applicability of the above method for the estimation of long-period earthquake ground motion was improved by using the derived 1D S-wave velocity profile. (author)

  12. The suitability of EIT to estimate EELV in a clinical trial compared to oxygen wash-in/wash-out technique.

    Science.gov (United States)

    Karsten, Jan; Meier, Torsten; Iblher, Peter; Schindler, Angela; Paarmann, Hauke; Heinze, Hermann

    2014-02-01

    Open endotracheal suctioning procedure (OSP) and recruitment manoeuvre (RM) are known to induce severe alterations of end-expiratory lung volume (EELV). We hypothesised that EIT lung volumes lack clinical validity. We studied the suitability of EIT to estimate EELV compared to the oxygen wash-in/wash-out technique. Fifty-four postoperative cardiac surgery patients were enrolled and received standardized ventilation and OSP. Patients were randomized into two groups receiving either RM after suctioning (group RM) or no RM (group NRM). Measurements were conducted at the following time points: baseline (T1), after suctioning (T2), after RM or NRM (T3), and 15 and 30 min after T3 (T4 and T5). We measured EELV using the oxygen wash-in/wash-out technique (EELVO2) and computed EELV from EIT (EELVEIT) at each time point by the following formula: EELVEIT = EELVO2 + ΔEELI × VT/ΔZ. EELVEIT values were compared with EELVO2 using Bland-Altman analysis and Pearson correlation. Limits of agreement ranged from -0.83 to 1.31 L. Pearson correlation revealed significant results. There was no significant impact of RM or NRM on the EELVO2-EELVEIT relationship (p=0.21; p=0.23). During typical routine respiratory manoeuvres like endotracheal suctioning or alveolar recruitment, EELV cannot be estimated by EIT with reasonable accuracy.
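
    The Bland-Altman analysis used here reports the bias (mean difference) and the 95% limits of agreement between the two methods; a sketch on hypothetical paired EELV values (not the study's measurements):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement: bias (mean difference) and 95% limits of agreement."""
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired EELV measurements (litres): oxygen wash-in/wash-out vs. EIT-derived
eelv_o2  = [2.1, 2.4, 1.8, 2.9, 2.5, 2.2]
eelv_eit = [2.3, 2.2, 1.9, 3.1, 2.4, 2.5]
bias, lo, hi = bland_altman(eelv_o2, eelv_eit)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Wide limits of agreement relative to the clinically relevant EELV range are what led the authors to reject EIT-derived EELV.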

  13. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively

  14. Application of PSO (particle swarm optimization) and GA (genetic algorithm) techniques on demand estimation of oil in Iran

    International Nuclear Information System (INIS)

    Assareh, E.; Behrang, M.A.; Assari, M.R.; Ghanbarzadeh, A.

    2010-01-01

    This paper presents application of PSO (Particle Swarm Optimization) and GA (Genetic Algorithm) techniques to estimate oil demand in Iran, based on socio-economic indicators. The models are developed in two forms (exponential and linear) and applied to forecast oil demand in Iran. PSO-DEM and GA-DEM (PSO and GA demand estimation models) are developed to estimate the future oil demand values based on population, GDP (gross domestic product), import and export data. Oil consumption in Iran from 1981 to 2005 is considered as the case of this study. The available data is partly used for finding the optimal, or near optimal values of the weighting parameters (1981-1999) and partly for testing the models (2000-2005). For the best results of GA, the average relative errors on testing data were 2.83% and 1.72% for GA-DEM (exponential) and GA-DEM (linear), respectively. The corresponding values for PSO were 1.40% and 1.36% for PSO-DEM (exponential) and PSO-DEM (linear), respectively. Oil demand in Iran is forecasted up to year 2030. (author)
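As a sketch of the PSO-DEM idea (not the paper's implementation), the snippet below fits a linear demand model y = w·x + b by minimizing the mean absolute percentage error with a minimal global-best particle swarm. The data are synthetic stand-ins for the population/GDP/import/export series, which are not reproduced in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for four socio-economic indicators and observed demand.
X = rng.uniform(1, 10, size=(25, 4))
true_w = np.array([2.0, 1.0, 0.5, 1.5])
y = X @ true_w + 3.0                      # strictly positive target

def mape(w):
    """Mean absolute percentage error of the linear form w[:4]·x + w[4]."""
    pred = X @ w[:4] + w[4]
    return np.mean(np.abs((y - pred) / y)) * 100

# Minimal global-best PSO with inertia 0.7 and cognitive/social weights 1.5.
n_particles, dims, iters = 30, 5, 200
pos = rng.uniform(-5, 5, (n_particles, dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mape(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
start_best = pbest_val.min()              # best error before optimization
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mape(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The exponential form of the model and the GA variant differ only in the prediction function and the update rule; the fitness (relative error on a held-out period) plays the same role.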

  15. Comparison of techniques for estimating PAH bioavailability: Uptake in Eisenia fetida, passive samplers and leaching using various solvents and additives

    Energy Technology Data Exchange (ETDEWEB)

    Bergknut, Magnus [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)]. E-mail: magnus.bergknut@chem.umu.se; Sehlin, Emma [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Lundstedt, Staffan [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Andersson, Patrik L. [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Haglund, Peter [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Tysklind, Mats [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)

    2007-01-15

    The aim of this study was to evaluate different techniques for assessing the availability of polycyclic aromatic hydrocarbons (PAHs) in soil. This was done by comparing the amounts (total and relative) taken up by the earthworm Eisenia fetida with the amounts extracted by solid-phase microextraction (SPME), semi-permeable membrane devices (SPMDs), leaching with various solvent mixtures, leaching using additives, and sequential leaching. Bioconcentration factors of PAHs in the earthworms based on equilibrium partitioning theory resulted in poor correlations to observed values. This was most notable for PAHs with high concentrations in the studied soil. Evaluation by principal component analysis (PCA) showed distinct differences between the evaluated techniques and, generally, there were larger proportions of carcinogenic PAHs (4-6 fused rings) in the earthworms. These results suggest that it may be difficult to develop a chemical method that is capable of mimicking biological uptake, and thus estimating the bioavailability of PAHs. - The total and relative amounts of PAHs extracted by abiotic techniques for assessing the bioavailability of PAHs were found to differ from the amounts taken up by Eisenia fetida.

  16. Comparison of techniques for estimating PAH bioavailability: Uptake in Eisenia fetida, passive samplers and leaching using various solvents and additives

    International Nuclear Information System (INIS)

    Bergknut, Magnus; Sehlin, Emma; Lundstedt, Staffan; Andersson, Patrik L.; Haglund, Peter; Tysklind, Mats

    2007-01-01

    The aim of this study was to evaluate different techniques for assessing the availability of polycyclic aromatic hydrocarbons (PAHs) in soil. This was done by comparing the amounts (total and relative) taken up by the earthworm Eisenia fetida with the amounts extracted by solid-phase microextraction (SPME), semi-permeable membrane devices (SPMDs), leaching with various solvent mixtures, leaching using additives, and sequential leaching. Bioconcentration factors of PAHs in the earthworms based on equilibrium partitioning theory resulted in poor correlations to observed values. This was most notable for PAHs with high concentrations in the studied soil. Evaluation by principal component analysis (PCA) showed distinct differences between the evaluated techniques and, generally, there were larger proportions of carcinogenic PAHs (4-6 fused rings) in the earthworms. These results suggest that it may be difficult to develop a chemical method that is capable of mimicking biological uptake, and thus estimating the bioavailability of PAHs. - The total and relative amounts of PAHs extracted by abiotic techniques for assessing the bioavailability of PAHs were found to differ from the amounts taken up by Eisenia fetida.

  17. Accuracy and feasibility of estimated tumour volumetry in primary gastric gastrointestinal stromal tumours: validation using semiautomated technique in 127 patients.

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B; O'Neill, Ailbhe C; Nishino, Mizuki; Rosenthal, Michael H; Ramaiya, Nikhil H

    2016-01-01

    To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semiautomated volumetry. In this IRB-approved retrospective study, we measured the three longest diameters along the x, y, z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimated volumes (V1-V6) were obtained using formulae for spheres and ellipsoids. Intra- and interobserver agreement of Vsegmented and agreement of V1-6 with Vsegmented were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Median Vsegmented and V1-V6 were 75.9, 124.9, 111.6, 94.0, 94.4, 61.7 and 80.3 cm3, respectively. There was strong intra- and interobserver agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x ≠ y ≠ z), with a CCC of 0.96 [95% CI 0.95-0.97]. The mean relative difference was smallest for V6 (0.6%), while it was -19.1% for V5, +14.5% for V4, +17.9% for V3, +32.6% for V2 and +47% for V1. Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semiautomated techniques are unavailable. Estimation of tumour volume in primary GIST using mathematical formulae is feasible. Gastric GISTs are rarely spherical. Segmented volumes are highly concordant with three-axis-based scalene ellipsoid volumes. Ellipsoid volume can be used as an alternative to automated tumour volumetry.
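The best-performing estimate (V6, the scalene ellipsoid) is simply the ellipsoid volume formula applied to the three measured diameters; a minimal sketch:

```python
import math

def ellipsoid_volume(dx, dy, dz):
    """Scalene-ellipsoid estimate from three orthogonal diameters:
    V = (pi/6) * x * y * z."""
    return math.pi / 6.0 * dx * dy * dz

def sphere_volume(dx):
    """Sphere estimate from a single diameter: V = (pi/6) * x**3."""
    return math.pi / 6.0 * dx ** 3
```

With equal diameters the ellipsoid estimate collapses to the sphere volume; the study's +47% overestimate for the sphere-based V1 reflects how rarely gastric GISTs are actually spherical.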

  18. Risk estimation in association with diagnostic techniques in the nuclear medicine service of the Camaguey Ciego de Avila Territory

    International Nuclear Information System (INIS)

    Barrerras, C.A.; Brigido, F.O.; Naranjo, L.A.; Lasserra, S.O.; Hernandez Garcia, J.

    1999-01-01

    The nuclear medicine service at the Maria Curie Oncological Hospital, Camaguey, has over three decades of experience in using radiopharmaceutical imaging agents for diagnosis. Although the clinical risk associated with these techniques is negligible, it is necessary to evaluate the effective dose delivered to the patient due to the introduction of radioactive substances into the body. The study of the dose administered to the patient provides useful data for evaluating the detriment associated with this medical practice, for its subsequent optimization and, consequently, for minimizing the stochastic effects on the patient. The aim of our paper is to study the collective effective dose delivered by the nuclear medicine service to the Camaguey and Ciego de Avila population from 1995 to 1998 and the relative contribution of the different diagnostic examinations to the total annual collective effective dose. The studies were conducted on the basis of statistics from nuclear medicine examinations given to a population of 1102353 inhabitants since 1995. The results show that the nuclear medicine techniques of neck examinations with 1168.8 man Sv (1.11 Sv/expl), thyroid explorations with 119.6 man Sv (55.5 mSv/expl) and iodide uptake with 113.7 man Sv (14.0 mSv/expl) are the main techniques contributing to the total annual collective effective dose of 1419.5 man Sv. The risk associated with the diagnostic techniques in the nuclear medicine service studied is globally low (total detriment: 103.6 as a result of 16232 explorations), similar to other published data

  19. Techniques for Estimating Emissions Factors from Forest Burning: ARCTAS and SEAC4RS Airborne Measurements Indicate which Fires Produce Ozone

    Science.gov (United States)

    Chatfield, Robert B.; Andreae, Meinrat O.

    2016-01-01

    Previous studies of emission factors from biomass burning are prone to large errors since they ignore the interplay of mixing and varying pre-fire background CO2 levels. Such complications severely affected our studies of 446 forest fire plume samples measured in the Western US by the science teams of NASA's SEAC4RS and ARCTAS airborne missions. Consequently we propose a Mixed Effects Regression Emission Technique (MERET) to check techniques like the Normalized Emission Ratio Method (NERM), where use of sequential observations cannot disentangle emissions and mixing. We also evaluate a simpler "consensus" technique. All techniques relate emissions to fuel burned using C(burn) = delta C(tot) added to the fire plume, where C(tot) approximately equals (CO2 + CO). Mixed-effects regression can estimate pre-fire background values of C(tot) (indexed by observation j) simultaneously with emission factors indexed by individual species i, (delta x_i/C(burn))i,j. MERET and "consensus" require more than emissions indicators. Our studies excluded samples where exogenous CO or CH4 might have been fed into a fire plume, mimicking emission. We sought to let the data on 13 gases and particulate properties suggest clusters of variables and plume types, using non-negative matrix factorization (NMF). While samples were mixtures, the NMF unmixing suggested purer burn types. Particulate properties (b_scat, b_abs, SSA, AAE) and gas-phase emissions were interrelated. Finally, we sought a simple categorization useful for modeling ozone production in plumes. Two kinds of fires produced high ozone: those with large fuel nitrogen as evidenced by remnant CH3CN in the plumes, and also those from very intense large burns. Fire types with optimal ratios of delta NOy/delta HCHO associate with the highest additional ozone per unit C(burn); perhaps these plumes exhibit limited NOx binding to reactive organics.
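A toy version of the mixed-effects idea (hypothetical numbers, not MERET itself): each plume j gets its own background level, while a single emission factor per unit carbon burned is shared across plumes, and both are recovered jointly by ordinary least squares on a dummy-coded design matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_plumes, n_obs = 3, 10
cburn = rng.uniform(10, 100, (n_plumes, n_obs))   # carbon burned per sample
true_bg = np.array([80.0, 95.0, 110.0])           # per-plume CO background
true_ef = 0.08                                    # shared CO emission factor
co = true_bg[:, None] + true_ef * cburn           # noise-free observations

# Design matrix: one background dummy column per plume, plus the C_burn column.
rows, y = [], []
for j in range(n_plumes):
    for k in range(n_obs):
        dummy = np.zeros(n_plumes)
        dummy[j] = 1.0
        rows.append(np.append(dummy, cburn[j, k]))
        y.append(co[j, k])
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
backgrounds, ef = coef[:n_plumes], coef[n_plumes]
```

The real technique fits many species at once and lets the per-observation backgrounds vary, but the structural trick is the same: backgrounds and emission factors occupy separate columns of one regression.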

  20. A rapid technique for estimating the depth and width of a two-dimensional plate from self-potential data

    International Nuclear Information System (INIS)

    Mehanee, Salah; Smith, Paul D; Essa, Khalid S

    2011-01-01

    Rapid techniques for self-potential (SP) data interpretation are of prime importance in engineering and exploration geophysics. Parameters (e.g. depth, width) estimation of the ore bodies has also been of paramount concern in mineral prospecting. In many cases, it is useful to assume that the SP anomaly is due to an ore body of simple geometric shape and to use the data to determine its parameters. In light of this, we describe a rapid approach to determine the depth and horizontal width of a two-dimensional plate from the SP anomaly. The rationale behind the scheme proposed in this paper is that, unlike the two- (2D) and three-dimensional (3D) SP rigorous source current inversions, it does not demand a priori information about the subsurface resistivity distribution nor high computational resources. We apply the second-order moving average operator on the SP anomaly to remove the unwanted (regional) effect, represented by up to a third-order polynomial, using filters of successive window lengths. By defining a function F at a fixed window length (s) in terms of the filtered anomaly computed at two points symmetrically distributed about the origin point of the causative body, the depth (z) corresponding to each half-width (w) is estimated by solving a nonlinear equation in the form ξ(s, w, z) = 0. The estimated depths are then plotted against their corresponding half-widths on a graph representing a continuous curve for this window length. This procedure is then repeated for each available window length. The depth and half-width solution of the buried structure is read at the common intersection of these various curves. The improvement of this method over the published first-order moving average technique for SP data is demonstrated on a synthetic data set. 
It is then verified on noisy synthetic data, complicated structures and successfully applied to three field examples for mineral exploration and we have found that the estimated depth is in good agreement with
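The second-order moving-average operator referred to above can be sketched as two passes of the window filter r(x) = V(x) − [V(x+s) + V(x−s)]/2, which together annihilate regional trends up to third order. A minimal numpy version (my notation, not the authors' code):

```python
import numpy as np

def moving_average_residual(v, s):
    """One pass of the window-s operator: subtracts the mean of the values
    offset ±s samples, which removes any linear regional trend exactly."""
    out = np.full_like(v, np.nan, dtype=float)
    out[s:-s] = v[s:-s] - 0.5 * (v[2 * s:] + v[:-2 * s])
    return out

def second_order_residual(v, s):
    """Two passes remove regional polynomials up to third order, at the cost
    of an unusable margin of 2s samples at each end of the profile."""
    return moving_average_residual(moving_average_residual(v, s), s)
```

Repeating this for successive window lengths s, and solving ξ(s, w, z) = 0 on each filtered anomaly, yields the family of (w, z) curves whose common intersection gives the plate's depth and half-width.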

  1. The use of thermovision technique to estimate the properties of highly filled polyolefins composites with calcium carbonate

    Energy Technology Data Exchange (ETDEWEB)

    Jakubowska, Paulina; Klozinski, Arkadiusz [Poznan University of Technology, Institute of Technology and Chemical Engineering, Polymer Division Pl. M. Sklodowskiej-Curie 2, 60-965 Poznan, Poland, Paulina.Jakubowska@put.poznan.pl (Poland)

    2015-05-22

    The aim of this work was to determine the possibility of using the thermovision technique for estimating the thermal properties of ternary highly filled composites (PE-MD/iPP/CaCO3) and polymer blends (PE-MD/iPP) during mechanical measurements. The ternary, polyolefin-based composites contained the following amounts of calcium carbonate: 48, 56, and 64 wt %. All materials were subjected to cyclic tensile loads (x1, x5, x10, x20, x50, x100, x500, x1000). Simultaneously, a fully radiometric recording was made using a TESTO infrared camera. After the fatigue process, all samples were subjected to a static tensile test and the maximum temperature at break was also recorded. The temperature values were analyzed as a function of cyclic loads and the filler content. The changes in the Young's modulus values were also investigated.

  2. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1 x 10^-3 and 2p = 4.8 x 10^-3, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index...

  3. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing TechniqueIncluding Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    The DeltaQ test is a method of estimating the air leakage from forced air duct systems. Developed primarily for residential and small commercial applications, it uses the changes in blower door test results due to forced air system operation. Previous studies established the principles behind DeltaQ testing, but raised issues about the precision of the test, particularly for leaky homes on windy days. Details of the measurement technique are available in an ASTM Standard (ASTM E1554-2007). In order to ease adoption of the test method, this study answers questions regarding the uncertainty due to changing weather during the test (particularly changes in wind speed) and the applicability to low leakage systems. The first question arises because the building envelope air flows and pressures used in the DeltaQ test are influenced by weather-induced pressures. Variability in wind-induced pressures rather than temperature-difference-induced pressures dominates this effect because the wind pressures change rapidly over the time period of a test. The second question needs to be answered so that DeltaQ testing can be used in programs requiring or giving credit for tight ducts (e.g., California's Building Energy Code (CEC 2005)). DeltaQ modeling biases have been previously investigated in laboratory studies where there were no weather-induced changes in envelope flows and pressures. Laboratory work by Andrews (2002) and Walker et al. (2004) found biases of about 0.5% of forced air system blower flow and individual test uncertainty of about 2% of forced air system blower flow. The laboratory tests were repeated by Walker and Dickerhoff (2006 and 2008) using a new ramping technique that continuously varied envelope pressures and air flows rather than taking data at pre-selected pressure stations (as used in ASTM E1554-2003 and other previous studies). The biases and individual test uncertainties for ramping were found to be very close (less than 0.5% of air handler flow) to those

  4. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO), Region ACO (RACO), Stochastic ACO (SACO) and Homogeneous ACO (HACO) algorithms, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), the normal (N-N), and the logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.

  5. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even as the vibration amplitudes of the video became increasingly large.
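The Kalman stage can be illustrated on a single motion parameter (e.g. the per-frame x-translation recovered from matched features). This is a generic constant-velocity filter with illustrative noise settings q and r, not the paper's specific scaling-aware model:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over one motion parameter.
    State = [position, velocity]; only position is observed."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1 frame)
    H = np.array([[1.0, 0.0]])               # observation: position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

Subtracting the smoothed trajectory from the raw one leaves only the high-frequency jitter, which is what the adjacent-frame compensation removes.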

  6. Estimates of soil erosion and deposition of cultivated soil of Nakhla watershed, Morocco, using 137Cs technique and calibration models

    International Nuclear Information System (INIS)

    Bouhlassa, S.; Moukhchane, M.; Aiachi, A.

    2000-01-01

    Despite the effective threat of erosion for soil preservation and productivity in Morocco, there is still only limited information on the rates of soil loss involved. This study aims to establish long-term erosion rates on cultivated land in the Nakhla watershed, located in the north of the country, using the 137Cs technique. Two sampling strategies were adopted. The first is aimed at establishing areal estimates of erosion, whereas the second, based on a transect approach, intends to determine point erosion. Twenty-one cultivated sites and seven undisturbed sites apparently not affected by erosion or deposition were sampled to 35 cm depth. Nine cores were collected along a transect of 149 m length. The assessment of erosion rates with models varying in complexity, from the simple Proportional Model to more complex Mass Balance Models which attempt to include the processes controlling the redistribution of 137Cs in soil, enables us to demonstrate the significance of the soil erosion problem on cultivated land. Erosion rates rise up to 50 t ha^-1 yr^-1. The 137Cs-derived erosion rates provide a reliable representation of the water erosion pattern in the area, and indicate the importance of the tillage process in the redistribution of 137Cs in soil. For aggrading sites, a Constant Rate of Supply (CRS) Model was adapted and introduced to estimate the depositional rate easily. (author) [fr
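Of the calibration models mentioned, the simplest is the proportional model, which maps the fractional 137Cs inventory reduction directly onto a soil-loss rate. A sketch of the standard form (the parameter values in the test are hypothetical, not the Nakhla data):

```python
def inventory_reduction(sample, reference):
    """X: percentage 137Cs inventory loss relative to the local reference."""
    return 100.0 * (reference - sample) / reference

def proportional_model(d, B, X, T):
    """Proportional model: Y = 10*d*B*X / (100*T), in t ha^-1 yr^-1.
    d: plough depth (m), B: bulk density (kg m^-3),
    X: percent inventory reduction, T: years since onset of fallout.
    The factor 10 converts kg m^-2 to t ha^-1."""
    return 10.0 * d * B * X / (100.0 * T)
```

The mass-balance models refine this by tracking fallout input, tillage mixing and progressive removal year by year, rather than assuming the inventory loss is proportional to the soil loss.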

  7. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a molar basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce the uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard litres per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that electrical efficiency for non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.
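Combining the quoted component uncertainties in quadrature reproduces the headline figure: with ±0.14% (power), ±1.0% (flow rate) and ±0.12% (heating value), the relative uncertainty of efficiency = P / (flow × heating value) comes out just over 1%. A sketch, assuming the three components are independent:

```python
import math

def efficiency_relative_uncertainty(u_power, u_flow, u_heating_value):
    """Relative uncertainty of efficiency = P / (flow * heating_value),
    combining independent relative uncertainties (all in %) in quadrature."""
    return math.sqrt(u_power**2 + u_flow**2 + u_heating_value**2)
```

The flow-rate term dominates, which is why the paper spends most of its calibration effort on the thermal mass-flow meters.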

  8. Reliable and Damage-Free Estimation of Resistivity of ZnO Thin Films for Photovoltaic Applications Using Photoluminescence Technique

    Directory of Open Access Journals (Sweden)

    N. Poornima

    2013-01-01

    This work projects photoluminescence (PL) as an alternative technique to estimate the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) by varying deposition parameters like solvent, spray rate, pH of precursor, and so forth, have been used for this study. Variation in the deposition conditions has a tremendous impact on the luminescence properties as well as the resistivity. Two emissions could be recorded for all samples: the near band edge emission (NBE) at 380 nm and the deep level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the DLE to NBE intensities (IDLE/INBE) can be reduced by controlling oxygen incorporation in the sample. Resistivity measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of IDLE/INBE and of resistivity for samples prepared under different deposition conditions is similar in nature. IDLE/INBE was always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.

  9. Spatial and temporal single-cell volume estimation by a fluorescence imaging technique with application to astrocytes in primary culture

    Science.gov (United States)

    Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten

    1999-05-01

    Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.

  10. Validation of myocardial blood flow estimation with nitrogen-13 ammonia PET by the argon inert gas technique in humans

    International Nuclear Information System (INIS)

    Kotzerke, J.; Glatting, G.; Neumaier, B.; Reske, S.N.; Hoff, J. van den; Hoeher, M.; Woehrle, J.

    2001-01-01

    We simultaneously determined global myocardial blood flow (MBF) by the argon inert gas technique and by nitrogen-13 ammonia positron emission tomography (PET) to validate PET-derived MBF values in humans. A total of 19 patients were investigated at rest (n=19) and during adenosine-induced hyperaemia (n=16). Regional coronary artery stenoses were ruled out by angiography. The argon inert gas method uses the difference between arterial and coronary sinus argon concentrations during inhalation of a mixture of 75% argon and 25% oxygen to estimate global MBF. It can be considered as valid as the microspheres technique, which, however, cannot be applied in humans. Dynamic PET was performed after injection of 0.8±0.2 GBq of 13N-ammonia, and MBF was calculated by applying a two-tissue compartment model. MBF values derived from the argon method at rest and during the hyperaemic state were 1.03±0.24 ml min^-1 g^-1 and 2.64±1.02 ml min^-1 g^-1, respectively. MBF values derived from ammonia PET at rest and during hyperaemia were 0.95±0.23 ml min^-1 g^-1 and 2.44±0.81 ml min^-1 g^-1, respectively. The correlation between the two methods was close (y=0.92x+0.14, r=0.96; P<0.001), supporting the validity of MBF estimation in humans with 13N-ammonia PET. (orig.)

  11. Estimating distribution and connectivity of recolonizing American marten in the northeastern United States using expert elicitation techniques

    Science.gov (United States)

    Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.

    2018-01-01

    The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over-harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and the lack of historical distribution records are also problematic for region-wide conservation planning. Expert opinion can provide a source of information for estimating species-landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts that included wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We then fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model-averaged based on AIC weights. The final model included effects of five covariates at the 5-km scale: percent canopy cover (positive), percent spruce-fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. The results demonstrate the effectiveness of expert-opinion data at modeling occupancy for rare
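The model-averaging step can be made concrete with Akaike weights; the AIC values and coefficient vectors below are made up for illustration, not taken from the study.

```python
import numpy as np

def aic_weights(aic):
    """Akaike weights: exp(-delta_i/2) for each model's AIC difference from
    the best model, normalised to sum to one."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

def model_average(coefs, weights):
    """Average each model's coefficient vector, weighted by Akaike weight.
    coefs: (n_models, n_coefs) array; returns an (n_coefs,) vector."""
    return np.asarray(coefs, dtype=float).T @ np.asarray(weights)
```

Models within a couple of AIC units of the best retain meaningful weight, which is why the three top models here were averaged rather than picking a single winner.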

  12. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    Science.gov (United States)

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). To develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer Learning as a type of Deep Learning Convolutional Neural Network approach outperformed all other tested approaches. 
Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was
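    The validation metrics used in the cross-validation above (accuracy, mean absolute stage difference, linearly weighted kappa) can be computed directly from paired reference and predicted stages. A minimal sketch with hypothetical stage labels, not data from the study:

```python
from collections import Counter

def staging_metrics(ref, pred, n_stages=10):
    """Accuracy, mean absolute stage difference, and linearly weighted kappa
    for predicted vs. consensus reference stages (coded 0..n_stages-1)."""
    n = len(ref)
    accuracy = sum(r == p for r, p in zip(ref, pred)) / n
    mad = sum(abs(r - p) for r, p in zip(ref, pred)) / n
    # Linear weights: disagreement cost grows linearly with stage distance.
    w = lambda i, j: abs(i - j) / (n_stages - 1)
    observed = sum(w(r, p) for r, p in zip(ref, pred)) / n
    cr, cp = Counter(ref), Counter(pred)  # marginal stage frequencies
    expected = sum(w(i, j) * cr[i] * cp[j] for i in cr for j in cp) / n ** 2
    return accuracy, mad, 1 - observed / expected

# Toy example: five molars, stages 0-4 (hypothetical labels)
acc, mad, kappa = staging_metrics([0, 1, 2, 3, 4], [0, 2, 2, 3, 3], n_stages=10)
```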

  13. Evaluation of the 137Cs technique for estimating wind erosion losses for some sandy Western Australian soils

    International Nuclear Information System (INIS)

    Harper, R.J.; Gilkes, R.J.

    1994-01-01

    The utility of the caesium-137 technique for estimating the effects of wind erosion was evaluated on the soils of a semi-arid agricultural area near Jerramungup, Western Australia. The past incidence of wind erosion was estimated from field observations of soil profile morphology and an existing remote sensing study. Erosion was limited to sandy surfaced soils (0-4% clay), with a highly significant difference in 137Cs values between eroded and non-eroded sandy soils, with mean values of 243±17 and 386±13 Bq m⁻² respectively. Non-eroded soils with larger clay contents had a mean 137Cs content of 421±26 Bq m⁻²; however, due to considerable variation between replicate samples, this value was not significantly different from that of the non-eroded sands. Hence, although the technique discriminates between eroded and non-eroded areas, the large variation in 137Cs values means that from 27 to 96 replicate samples are required to provide statistically valid estimates of 137Cs loss. The occurrence of around 18% of the total 137Cs between 10 and 20 cm depth in these soils, despite cultivation being confined to the surface 9 cm, suggests that leaching of 137Cs occurs in the sandy soils, although there was no relationship between clay content and 137Cs value for either eroded or non-eroded soils. In a multiple linear regression, organic carbon content and the mean grain size of the eroded soils explained 35% of the variation in 137Cs content. This relationship suggests that both organic carbon and 137Cs are removed by erosion, with erosion being more prevalent on soils with a finer sand fraction. Clay and silt contents do not vary with depth in the near-surface horizons of the eroded sandy soils, hence it is likely that wind erosion strips the entire surface horizon with its 137Cs content, rather than selectively winnowing fine material. 71 refs., 6 tabs., 2 figs.
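    A replicate-count range like the 27 to 96 quoted above follows from standard sample-size arithmetic: the number of cores needed grows with the square of the coefficient of variation of the 137Cs inventory. A sketch with illustrative standard deviations; the CVs, allowable error and t-value here are assumptions, not values from the paper:

```python
import math

def replicates_needed(std, mean, rel_error=0.1, t=2.0):
    """Approximate number of replicate cores needed so the mean 137Cs
    inventory is within +/-rel_error of the true mean at ~95% confidence
    (t ~ 2 for moderately large n): n = (t * CV / rel_error)^2."""
    cv = std / mean
    return math.ceil((t * cv / rel_error) ** 2)

# Illustrative spreads only (not from the paper): a low-CV and a high-CV site
n_low = replicates_needed(std=100.0, mean=386.0)   # CV ~ 0.26
n_high = replicates_needed(std=119.0, mean=243.0)  # CV ~ 0.49
```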

  14. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    Two trials of the previous child or preceding birth technique, in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions asked and the analysis of results are straightforward when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant, but the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers, including 1 hospital in Bamako, Mali, indicated some practical problems, and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions, added to household surveys on immunization or diarrheal disease control for example, could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted by month and year of occurrence. 
The interviews took only slightly longer than the basic survey

  15. Characterization of Pb(Zr, Ti)O3 thin films fabricated by plasma enhanced chemical vapor deposition on Ir-based electrodes

    International Nuclear Information System (INIS)

    Lee, Hee-Chul; Lee, Won-Jong

    2002-01-01

    Structural and electrical characteristics of Pb(Zr,Ti)O3 (PZT) ferroelectric thin films deposited on various Ir-based electrodes (Ir, IrO2, and Pt/IrO2) using electron cyclotron resonance plasma enhanced chemical vapor deposition were investigated. On the Ir electrode, stoichiometric PZT films with pure perovskite phase could be obtained over a very wide range of processing conditions. However, PZT films prepared on the IrO2 electrode contained a large amount of PbOx phases and exhibited a highly Pb-excess composition. The deposition characteristics were dependent on the behavior of PbO molecules on the electrode surface. The PZT thin film capacitors prepared on the Ir bottom electrode showed different electrical properties depending on the top electrode material. The PZT capacitors with Ir, IrO2, and Pt top electrodes showed good leakage current characteristics, whereas those with the Ru top electrode showed a very high leakage current density. The PZT capacitor exhibited the best fatigue endurance with an IrO2 top electrode, and an Ir top electrode provided better fatigue endurance than a Pt top electrode. The PZT capacitor with an Ir-based electrode is thought to be attractive for application to ferroelectric random access memory devices because of its wide processing window for a high-quality ferroelectric film and its good polarization, fatigue, and leakage current characteristics.

  16. Estimation of water quality parameters applying satellite data fusion and mining techniques in the lake Albufera de Valencia (Spain)

    Science.gov (United States)

    Doña, Carolina; Chang, Ni-Bin; Vannah, Benjamin W.; Sánchez, Juan Manuel; Delegido, Jesús; Camacho, Antonio; Caselles, Vicente

    2014-05-01

    In connection with the enforcement of the European Water Framework Directive (2000) (WFD), which establishes that all countries of the European Union must avoid deterioration of, improve and restore the status of their water bodies and maintain their good ecological status, several remote sensing studies have been carried out to monitor and understand trends in water quality variables. Lake Albufera de Valencia (Spain) is a hypereutrophic system that can present chlorophyll a concentrations over 200 mg·m-3 and transparency (Secchi disk) values below 20 cm, and therefore needs its water quality restored and improved. The principal aim of our work was to develop algorithms to estimate water quality parameters such as chlorophyll a concentration and water transparency, which are informative of the eutrophication and ecological status, using remote sensing data. Remote sensing data from Terra/MODIS, Landsat 5-TM and Landsat 7-ETM+ images were used to carry out this study. Landsat images are useful for analyzing the spatial variability of water quality variables, as well as for monitoring small to medium size water bodies, thanks to their 30-m spatial resolution; however, the poor temporal resolution of Landsat, with a 16-day revisit time, is an issue. In this work we addressed this data gap by applying fusion techniques to Landsat and MODIS images. Although MODIS has a coarser spatial resolution of 250/500 m, one image per day is available. Thus, synthetic Landsat images were created using data fusion for dates without Landsat acquisitions. Good correlation values were obtained when comparing original and synthetic Landsat images. Genetic programming was used to develop models for predicting water quality. Using the reflectance bands of the synthetic Landsat images as inputs to the model, values of R2 = 0.94 and RMSE = 8 mg·m-3 were obtained when comparing modeled and observed values of chlorophyll a, and values of R2 = 0.91 and RMSE = 4 cm for transparency (Secchi disk). Finally, concentration
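    The R2 and RMSE figures used above to judge the genetic-programming models follow the usual definitions; a minimal sketch with made-up observed and modelled chlorophyll values:

```python
def r2_rmse(obs, mod):
    """Coefficient of determination and root-mean-square error
    between observed and modelled values."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - m) ** 2 for o, m in zip(obs, mod))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)        # total sum of squares
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

# Hypothetical observed vs. modelled chlorophyll a (mg m-3)
r2, rmse = r2_rmse([10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 31.0, 39.0])
```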

  17. Estimating photometric redshifts for X-ray sources in the X-ATLAS field using machine-learning techniques

    Science.gov (United States)

    Mountrichas, G.; Corral, A.; Masoura, V. A.; Georgantopoulos, I.; Ruiz, A.; Georgakakis, A.; Carrera, F. J.; Fotopoulou, S.

    2017-12-01

    We present photometric redshifts for 1031 X-ray sources in the X-ATLAS field using the machine-learning technique TPZ. X-ATLAS covers 7.1 deg2 observed with XMM-Newton within the Science Demonstration Phase of the H-ATLAS field, making it one of the largest contiguous areas of the sky with both XMM-Newton and Herschel coverage. All of the sources have available SDSS photometry, while 810 additionally have mid-IR and/or near-IR photometry. A spectroscopic sample of 5157 sources primarily in the XMM/XXL field, but also from several X-ray surveys and the SDSS DR13 redshift catalogue, was used to train the algorithm. Our analysis reveals that the algorithm performs best when the sources are split, based on their optical morphology, into point-like and extended sources. Optical photometry alone is not enough to estimate accurate photometric redshifts, but the results greatly improve when at least mid-IR photometry is added in the training process. In particular, our measurements show that the estimated photometric redshifts for the X-ray sources of the training sample have a normalized absolute median deviation, nmad ≈ 0.06, and a percentage of outliers, η = 10-14%, depending upon whether the sources are extended or point-like. Our final catalogue contains photometric redshifts for 933 out of the 1031 X-ray sources with a median redshift of 0.9. The table of the photometric redshifts is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A39
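    The quality metrics quoted above follow common photo-z conventions; a sketch computing the normalized median absolute deviation and the outlier fraction, where the 0.15 outlier cut and the toy redshifts are assumptions, not the paper's values:

```python
import statistics

def photoz_quality(z_spec, z_phot, outlier_cut=0.15):
    """Normalized median absolute deviation and outlier fraction, using
    common photo-z conventions: dz = (z_phot - z_spec)/(1 + z_spec),
    nmad = 1.48 * median(|dz - median(dz)|), outlier if |dz| > outlier_cut."""
    dz = [(p - s) / (1 + s) for s, p in zip(z_spec, z_phot)]
    med = statistics.median(dz)
    nmad = 1.48 * statistics.median(abs(d - med) for d in dz)
    eta = sum(abs(d) > outlier_cut for d in dz) / len(dz)
    return nmad, eta

# Toy spectroscopic vs. photometric redshifts (last one is an outlier)
nmad, eta = photoz_quality([0.5, 1.0, 1.5, 2.0], [0.53, 0.9, 1.55, 3.0])
```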

  18. Bowen ratio/energy balance technique for estimating crop net CO2 assimilation, and comparison with a canopy chamber

    Science.gov (United States)

    Held, A. A.; Steduto, P.; Orgaz, F.; Matista, A.; Hsiao, T. C.

    1990-12-01

    This paper describes a Bowen ratio/energy balance (BREB) system which, in conjunction with an infra-red gas analyzer (IRGA), is referred to as BREB+ and is used to estimate evapotranspiration (ET) and net CO2 flux (NCF) over crop canopies. The system is composed of a net radiometer, soil heat flux plates, two psychrometers based on platinum resistance thermometers (PRTs), bridge circuits to measure resistances, an IRGA, air pumps and switching valves, and a data logger. The psychrometers are triple shielded and aspirated, with aspiration also between the two inner shields. High-resistance (1000 ohm) PRTs are used for dry and wet bulbs to minimize errors due to wiring and connector resistances. A high (55 kohm) fixed resistance serves as one arm of the resistance bridge to ensure linearity in output signals. To minimize gaps in data, to allow measurements at short (e.g., 5 min) intervals, and to simplify operation, the psychrometers were fixed at their upper and lower positions over the crop and not alternated. Instead, the PRTs, connected to the bridge circuit and the data logger, were carefully calibrated together. Field tests using a common air source showed apparent effects of the local environment around each psychrometer on the temperatures measured. ET rates estimated with the BREB system were compared to those measured with large lysimeters. Daily totals agreed within 5%. There was a tendency, however, for the lysimeter measurements to lag behind the BREB measurements. Daily patterns of NCF estimated with the BREB+ system are consistent with expectations from theory and data in the literature. Side-by-side comparisons with a stirred Mylar canopy chamber showed similar NCF patterns. On the other hand, discrepancies between the results of the two methods were quite marked in the morning or afternoon on certain dates. Part of the discrepancies may be attributed to inaccuracies in the psychrometric temperature measurements. Other possible causes
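    The BREB partitioning itself is compact: the Bowen ratio β = γΔT/Δe splits the available energy Rn − G into latent and sensible heat. A minimal sketch with illustrative gradients; the numbers are not from the paper:

```python
def breb_fluxes(rn, g, dT, de, gamma=0.066):
    """Partition available energy with the Bowen ratio/energy balance method.
    rn, g  : net radiation and soil heat flux (W m-2)
    dT, de : temperature (degC) and vapour-pressure (kPa) differences
             between the two psychrometer heights
    gamma  : psychrometric constant (~0.066 kPa/degC near sea level)
    Returns (bowen_ratio, latent_heat_flux, sensible_heat_flux)."""
    beta = gamma * dT / de
    le = (rn - g) / (1 + beta)  # latent heat flux, W m-2
    return beta, le, beta * le  # sensible heat = beta * LE

# Illustrative midday values over a well-watered crop
beta, le, h = breb_fluxes(rn=500.0, g=50.0, dT=1.0, de=0.22)
```

    Note that by construction LE + H closes the available-energy balance (Rn − G).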

  19. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    Science.gov (United States)

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic area such as national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
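    The gain in precision from SAE comes from shrinking each district's noisy direct estimate toward a model-based synthetic estimate, as in area-level (Fay-Herriot-type) models. A deliberately simplified sketch with hypothetical variances; the paper fits a full mixed model, not this two-number composite:

```python
def composite_estimate(direct, var_direct, synthetic, var_model):
    """Area-level composite (Fay-Herriot-style) estimator: shrink the noisy
    direct survey estimate toward a model-based synthetic estimate.
    gamma -> 1 when the direct estimate is precise, -> 0 when it is noisy."""
    gamma = var_model / (var_model + var_direct)
    return gamma * direct + (1 - gamma) * synthetic

# District with a tiny sample: the direct poverty rate is noisy, so the
# composite leans on the synthetic (census-covariate) component.
est = composite_estimate(direct=0.42, var_direct=0.02, synthetic=0.30, var_model=0.005)
```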

  1. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai [The Military General Hospital of Beijing PLA, Department of Radiology, Beijing (China)]; Zhang, Jing [The 309th Hospital of Chinese People's Liberation Army, Department of Radiology, Beijing (China)]

    2016-06-15

    To determine whether hepatic fat quantification is affected by administration of gadolinium using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures analysis of variance with pairwise comparisons was conducted to evaluate the systematic bias of fat fraction (FF) and R2* measurements between the three acquisitions. Bland-Altman plots were used to assess the agreement between pre- and post-contrast FF measurements in the liver. A P value <0.05 indicated a statistically significant difference. FF measurements of liver, spleen and spine revealed no significant systematic bias between the three measurements (P > 0.05 for all). Good agreement (95 % confidence interval) of FF measurements was demonstrated between pre-contrast and post-contrast 1 (-0.49 %, 0.52 %) and post-contrast 2 (-0.83 %, 0.77 %). R2* increased in liver and spleen (P = 0.039, P = 0.01) after administration of gadolinium. Although under the impact of an increased R2* in liver and spleen post-contrast, the investigational sequence can still obtain stable fat quantification. Therefore, it could be applied post-contrast to substantially increase the efficiency of MR examination and also provide a backup for the occasional failure of FF measurements pre-contrast. (orig.)
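    The pre/post-contrast agreement intervals reported above are Bland-Altman limits of agreement; a minimal sketch with hypothetical paired fat-fraction readings:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two sets of paired
    measurements (e.g. pre- and post-contrast fat fractions, in %)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired fat fractions (%) before and after contrast
pre = [5.0, 10.0, 15.0, 20.0]
post = [5.2, 9.8, 15.1, 20.3]
bias, lo, hi = bland_altman(pre, post)
```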

  3. Evaluating the use of electrical resistivity imaging technique for improving CH4 and CO2 emission rate estimations in landfills

    International Nuclear Information System (INIS)

    Georgaki, I.; Soupios, P.; Sakkas, N.; Ververidis, F.; Trantas, E.; Vallianatos, F.; Manios, T.

    2008-01-01

    In order to improve the estimation of surface gas emissions in landfills, we evaluated a combination of geophysical and greenhouse gas measurement methodologies. Based on fifteen 2D electrical resistivity tomographies (ERTs), longitudinal cross-section images of the buried waste layers were developed, identifying the location and cross-section size of organic waste (OW), organic waste saturated in leachates (SOW), and low organic and non-organic waste. CH4 and CO2 emission measurements were then conducted using the static chamber technique at 5 surface points along two tomographies: (a) across a high-emitting area, ERT no. 2, where different amounts of relatively fresh OW and SOW were detected, and (b) across the oldest (at least eight years) cell in the landfill, ERT no. 6, with significant amounts of OW. Where the highest emission rates were recorded, they were strongly affected by the thickness of the OW and SOW fraction underneath each gas sampling point. The main reason for lower than expected values was the age of the layered buried waste. Lower than predicted emissions were also attributed to soil condition, which was the case at sampling points with surface ponding, i.e. surface accumulation of leachate (or precipitated water).
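    A static-chamber emission rate is obtained from the slope of headspace concentration against time, scaled by chamber volume, footprint area and gas density. A sketch for CH4 under assumed chamber geometry and temperature; none of these numbers are from the study:

```python
def chamber_flux(times_h, conc_ppm, volume_m3, area_m2,
                 molar_mass_g=16.04, molar_vol_l=24.45):
    """Static-chamber flux: least-squares slope of headspace concentration
    vs. time, scaled by chamber geometry and gas density.
    molar_vol_l = 24.45 L/mol assumes 25 degC and 1 atm.
    Returns flux in mg m-2 h-1 (defaults are for CH4)."""
    n = len(times_h)
    mt, mc = sum(times_h) / n, sum(conc_ppm) / n
    slope = (sum((t - mt) * (c - mc) for t, c in zip(times_h, conc_ppm))
             / sum((t - mt) ** 2 for t in times_h))  # ppm per hour
    mg_per_m3_per_ppm = molar_mass_g / molar_vol_l   # 1 ppm = M/Vm mg m-3
    return slope * mg_per_m3_per_ppm * volume_m3 / area_m2

# Hypothetical chamber: 30 L over 0.1 m2, sampled every 15 min
flux = chamber_flux([0.0, 0.25, 0.5, 0.75], [1.8, 2.3, 2.8, 3.3],
                    volume_m3=0.03, area_m2=0.10)
```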

  4. A simplification of the deuterium oxide dilution technique using FT-IR analysis of plasma, for estimating piglet milk intake

    International Nuclear Information System (INIS)

    Glencross, B.D.; Tuckey, R.C.; Hartmann, P.E.; Mullan, B.P.

    1997-01-01

    Previous studies estimating milk intake using deuterium oxide (D2O) as a tracer have required sublimation of the sample fluid (usually plasma) to remove solids and retrieve total water. This procedure has been simplified by directly measuring the D2O content of plasma with a Fourier transform-infrared (FT-IR) spectrometer, removing the requirement for sample sublimation. Comparisons of samples that were split and then analysed both as water of sublimation and as total plasma showed that direct analysis of the plasma could be achieved without a loss in fidelity of the results (sublimated v. plasma, r2 = 0.976; n = 26). Linearity of assay standards was very high (r2 > 0.997). The modified technique was used to determine the milk intake of piglets from litters of 7 sows during established lactation (Days 10-15). Water turnover (WTO) was shown to be the primary point by which differences in piglet milk intakes were influenced. Differences in milk composition had minimal effect on the milk intake determinations. Milk intake by each piglet was shown to be strongly correlated with piglet growth (r2 = 0.59, … r2 = 0.84, P < 0.01). Copyright (1997) CSIRO Australia
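    The D2O dilution arithmetic behind water turnover is a pool size from the initial enrichment plus an exponential decline rate. A deliberately simplified sketch; background enrichment, dose and sampling values are invented, and the published method additionally corrects for water influx and isotope fractionation:

```python
import math

def water_turnover(dose_g, c0_ppm, c_t_ppm, days, bg_ppm=150.0):
    """Back-of-envelope D2O dilution: pool size from the initial enrichment,
    fractional turnover rate from the exponential decline, turnover = k * pool.
    Enrichments are in ppm D2O; the natural background bg_ppm is subtracted."""
    e0, et = c0_ppm - bg_ppm, c_t_ppm - bg_ppm
    pool_g = dose_g / (e0 / 1e6)  # grams of body water diluting the dose
    k = math.log(e0 / et) / days  # fractional turnover per day
    return pool_g, k * pool_g     # (pool size, g water turned over per day)

# Invented example: 5 g dose, enrichment falls from 1000 to 500 ppm
# above background over 5 days.
pool, wto = water_turnover(dose_g=5.0, c0_ppm=1150.0, c_t_ppm=650.0, days=5.0)
```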

  5. A Novel Differential Time-of-Arrival Estimation Technique for Impact Localization on Carbon Fiber Laminate Sheets

    Directory of Open Access Journals (Sweden)

    Eugenio Marino Merlo

    2017-10-01

    Full Text Available Composite material structures are commonly used in many industrial sectors (aerospace, automotive, transportation, and can operate in harsh environments where impacts with other parts or debris may cause critical safety and functionality issues. This work presents a method for improving the accuracy of impact position determination using acoustic source triangulation schemes based on the data collected by piezoelectric sensors attached to the structure. A novel approach is used to estimate the Differential Time-of-Arrival (DToA between the impact response signals collected by a triplet of sensors, overcoming the limitations of classical methods that rely on amplitude thresholds calibrated for a specific sensor type. An experimental evaluation of the proposed technique was performed with specially made circular piezopolymer (PVDF sensors designed for Structural Health Monitoring (SHM applications, and compared with commercial piezoelectric SHM sensors of similar dimensions. Test impacts at low energies from 35 mJ to 600 mJ were generated in a laboratory by free-falling metal spheres on a 500 mm × 500 mm × 1.25 mm quasi-isotropic Carbon Fiber Reinforced Polymer (CFRP laminate plate. From the analysis of many impact signals, the resulting localization error was improved for all types of sensors and, in particular, for the circular PVDF sensor an average error of 20.3 mm and a standard deviation of 8.9 mm was obtained.
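    Once DToAs are available from a sensor triplet, the impact point can be triangulated by minimizing the mismatch between measured and predicted arrival-time differences. A brute-force sketch assuming a single known group velocity; real CFRP plates are dispersive, and the paper's estimator is more refined:

```python
import math

def locate_impact(sensors, dtoa, speed, grid=200, size=0.5):
    """Grid-search triangulation: find the plate position whose predicted
    differential times of arrival (relative to sensor 0) best match the
    measured ones, in a least-squares sense."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            p = (size * i / grid, size * j / grid)
            t = [dist(p, s) / speed for s in sensors]
            err = sum((t[k] - t[0] - dtoa[k - 1]) ** 2 for k in (1, 2))
            if err < best_err:
                best, best_err = p, err
    return best

# Synthetic case on a 0.5 m plate: true impact at (0.30, 0.20) m,
# assumed group velocity 1500 m/s (illustrative, not measured).
sensors = [(0.05, 0.05), (0.45, 0.05), (0.25, 0.45)]
v = 1500.0
d0 = math.hypot(0.25, 0.15) / v  # time of flight to sensor 0
dtoa = [math.hypot(0.15, 0.15) / v - d0, math.hypot(0.05, 0.25) / v - d0]
est = locate_impact(sensors, dtoa, v)
```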

  6. Estimation of chromium-51 ethylene diamine tetra-acetic acid plasma clearance: A comparative assessment of simplified techniques

    International Nuclear Information System (INIS)

    Picciotto, G.; Cacace, G.; Mosso, R.; De Filippi, P.G.; Cesana, P.; Ropolo, R.

    1992-01-01

    Chromium-51 ethylene diamine tetra-acetic acid (51Cr-EDTA) total plasma clearance was evaluated using a multi-sample method (i.e. 12 blood samples) as the reference and compared with several simplified methods requiring only one or a few blood samples. The following 5 methods were evaluated: the terminal slope-intercept method with 3 blood samples, the simplified method of Broechner-Mortensen, and 3 single-sample methods (Constable, Christensen and Groth, Tauxe). Linear regression analysis was performed, and the standard error of estimate, bias and imprecision of the different methods were evaluated. For 51Cr-EDTA total plasma clearance greater than 30 ml·min⁻¹, the results closest to the reference were obtained with the Christensen and Groth method at a sampling time of 300 min (inaccuracy of 4.9%). For clearances between 10 and 30 ml·min⁻¹, single-sample methods failed to give reliable results; the terminal slope-intercept and Broechner-Mortensen methods were better, with inaccuracies of 17.7% and 16.9%, respectively. Although sampling times at 180, 240 and 300 min are time-consuming for patients, 51Cr-EDTA total plasma clearance can be accurately calculated for values greater than 10 ml·min⁻¹ using the Broechner-Mortensen method. In patients with clearance greater than 30 ml·min⁻¹, single-sample techniques provide a good alternative to the multi-sample method; the choice of method depends on the degree of accuracy required. (orig.)
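    For orientation, the slope-intercept clearance and a Broechner-Mortensen-style correction can be sketched as follows; the correction coefficients are the commonly quoted adult values from the original Broechner-Mortensen paper, not fitted in this study:

```python
import math

def slope_intercept_clearance(dose_mbq, times_min, conc_mbq_per_ml):
    """One-compartment slope-intercept clearance from 2+ late samples:
    fit ln(C) = ln(C0) - k*t by least squares, then Cl = dose * k / C0,
    giving clearance in ml/min."""
    n = len(times_min)
    logs = [math.log(c) for c in conc_mbq_per_ml]
    mt, ml = sum(times_min) / n, sum(logs) / n
    k = -(sum((t - mt) * (l - ml) for t, l in zip(times_min, logs))
          / sum((t - mt) ** 2 for t in times_min))
    c0 = math.exp(ml + k * mt)  # extrapolated concentration at t = 0
    return dose_mbq * k / c0

def brochner_mortensen(cl_si):
    """Correct the one-pool overestimate of true clearance (commonly
    quoted adult coefficients)."""
    return 0.990778 * cl_si - 0.001218 * cl_si ** 2

# Synthetic mono-exponential data: C(t) = 0.0005 * exp(-0.01 t), dose 3 MBq
times = [180.0, 240.0, 300.0]
cl_si = slope_intercept_clearance(3.0, times,
                                  [0.0005 * math.exp(-0.01 * t) for t in times])
cl = brochner_mortensen(cl_si)
```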

  7. A Comparison of Two Above-Ground Biomass Estimation Techniques Integrating Satellite-Based Remotely Sensed Data and Ground Data for Tropical and Semiarid Forests in Puerto Rico

    Science.gov (United States)

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite-based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory and Analysis (FIA)...

  8. Estimates of Free-tropospheric NO2 Abundance from the Aura Ozone Monitoring Instrument (OMI) Using Cloud Slicing Technique

    Science.gov (United States)

    Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.

    2013-12-01

    Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot-spots near pollution sources including power plants and urban areas. However, estimates of NO2 concentrations in the free troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical for evaluating the chemistry-climate and air quality models that are used for prediction of the evolution of tropospheric ozone and its impact on climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from the top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independently of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 VMR for a given pressure (altitude) range. We retrieve NO2 VMRs and compare them with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, OMI cloud slicing yields high NO2 VMRs where aircraft measurements detected lightning NOx during the Deep Convective Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced free-tropospheric NO2 commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the
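    The core of cloud slicing is a linear fit: the slope of above-cloud column versus cloud pressure, divided by the number of air molecules per unit column pressure (N_A/(M_air·g) ≈ 2.12e22 molec cm⁻² hPa⁻¹), gives the mixing ratio. A synthetic sketch:

```python
def cloud_slice_vmr(ocp_hpa, col_molec_cm2):
    """Cloud slicing: least-squares slope of above-cloud NO2 column vs.
    cloud optical centroid pressure, converted to a mixing ratio.
    Air column per unit pressure: N_A/(M_air*g) ~ 2.12e22 molec cm-2 hPa-1,
    so vmr = slope / 2.12e22 (mol/mol; multiply by 1e12 for pptv)."""
    n = len(ocp_hpa)
    mp, mc = sum(ocp_hpa) / n, sum(col_molec_cm2) / n
    slope = (sum((p - mp) * (c - mc) for p, c in zip(ocp_hpa, col_molec_cm2))
             / sum((p - mp) ** 2 for p in ocp_hpa))
    return slope / 2.12e22

# Synthetic scene: a uniform 50 pptv of NO2 through the sliced layer
pressures = [400.0, 500.0, 600.0, 700.0]  # cloud OCPs, hPa
cols = [p * 50e-12 * 2.12e22 for p in pressures]
vmr_pptv = cloud_slice_vmr(pressures, cols) * 1e12
```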

  9. A comparative study of 232Th and 238U activity estimation in soil samples by gamma spectrometry and neutron activation analysis technique

    International Nuclear Information System (INIS)

    Anilkumar, Rekha; Anilkumar, S.; Narayani, K.; Babu, D.A.R.; Sharma, D.N.

    2012-01-01

    Neutron activation analysis (NAA) is a well-established analytical technique with many advantages compared with other commonly used techniques. NAA can be performed in a variety of ways depending on the element, its activity level in the sample, interference from the sample matrix and other elements, etc. The technique offers high analytical sensitivity and low detection limits (ppm to ppb). The high sensitivity is due to irradiation at the high neutron flux available from research reactors, and the activity measurement is done using high resolution HPGe detectors. In this paper, activity estimates for soil samples obtained by neutron activation and by direct gamma spectrometry are compared. Even though the sample weights and sample preparation methods differ between the two methods, the estimated activity values are comparable. (author)
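    Both methods ultimately reduce to the same full-energy-peak arithmetic: activity follows from net counts, counting time, detection efficiency, gamma emission probability and sample mass. A sketch with assumed efficiency and yield values, not measurements from the paper:

```python
def specific_activity(net_counts, live_time_s, efficiency, gamma_yield, mass_kg):
    """Gamma-spectrometric specific activity from a full-energy peak:
    A = N / (t * eps * I_gamma * m), in Bq/kg."""
    return net_counts / (live_time_s * efficiency * gamma_yield * mass_kg)

# Illustrative 238U-series check via a 214Bi line; the efficiency and
# emission probability here are assumed round numbers.
a_bq_per_kg = specific_activity(net_counts=5000, live_time_s=60000,
                                efficiency=0.02, gamma_yield=0.455,
                                mass_kg=0.25)
```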

  10. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
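    The two estimators compared above are simple in their two-pass and single-recapture forms; a sketch of the Zippin-style removal estimator and the bias-corrected (Chapman) Lincoln-Petersen estimator with hypothetical catches:

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimate: N = c1^2/(c1-c2), p = 1 - c2/c1.
    Assumes equal catchability on both passes, which is exactly the
    assumption the study found violated."""
    return c1 * c1 / (c1 - c2), 1 - c2 / c1

def chapman(marked, caught, recaptured):
    """Bias-corrected Lincoln-Petersen (Chapman) mark-recapture estimate."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical catches: 60 then 20 fish removed; 50 marked, 30 recaptured
n_removal, p = two_pass_removal(60, 20)  # 90 fish, capture probability 2/3
n_mr = chapman(marked=50, caught=60, recaptured=30)
```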

  11. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques.
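The abstract does not give the new technique's equations. For orientation, here is a minimal sketch of the simpler one-compartment, two-sample slope-intercept clearance calculation that such two-sample methods build on; all numbers are hypothetical:

```python
import math

def two_sample_clearance(dose, c1, t1, c2, t2):
    """One-compartment, two-sample 'slope-intercept' clearance sketch.

    Fits c(t) = c0 * exp(-k*t) through two plasma samples, then
    CL = k * Vd with Vd = dose / c0. The paper's contribution is a
    two-compartment refinement of this kind of calculation, needed
    because Vd varies from patient to patient."""
    k = math.log(c1 / c2) / (t2 - t1)   # elimination rate constant
    c0 = c1 * math.exp(k * t1)          # back-extrapolated concentration at t=0
    vd = dose / c0                      # apparent distribution volume
    return k * vd                       # clearance = k * Vd

# Hypothetical: 1000 kBq injected; samples at 60 and 120 min
cl = two_sample_clearance(1000.0, 5.48812, 60.0, 3.01194, 120.0)
print(cl)  # close to 1.0 volume-unit/min for this synthetic data
```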

  12. Critical review of on-board capacity estimation techniques for lithium-ion batteries in electric and hybrid electric vehicles

    Science.gov (United States)

    Farmann, Alexander; Waag, Wladislaw; Marongiu, Andrea; Sauer, Dirk Uwe

    2015-05-01

    This work provides an overview of available methods and algorithms for on-board capacity estimation of lithium-ion batteries. An accurate state estimation for battery management systems in electric vehicles and hybrid electric vehicles is becoming more essential due to the increasing attention paid to safety and lifetime issues. Different approaches for the estimation of State-of-Charge, State-of-Health and State-of-Function have been discussed and analyzed by many authors and researchers in the past. On-board estimation of capacity in large lithium-ion battery packs is definitely one of the most crucial challenges of battery monitoring in the aforementioned vehicles. This is mostly due to highly dynamic operation and conditions far from those used in laboratory environments, as well as the large variation in aging behavior of each cell in the battery pack. Accurate capacity estimation allows an accurate driving range prediction and accurate calculation of a battery's maximum energy storage capability in a vehicle. At the same time it acts as an indicator for battery State-of-Health and Remaining Useful Lifetime estimation.
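One common family of on-board capacity estimators infers capacity from the charge throughput between two well-identified SOC points (e.g. obtained from rest-voltage lookups). A minimal sketch, not drawn from the review itself, with hypothetical numbers:

```python
def capacity_from_charge_throughput(currents_a, dt_s, soc_start, soc_end):
    """Estimate cell capacity (Ah) by coulomb counting between two known
    SOC points. A common family of on-board methods; real BMS algorithms
    add filtering and current-sensor bias correction."""
    charge_ah = sum(currents_a) * dt_s / 3600.0   # integrate current over time
    return charge_ah / (soc_end - soc_start)

# Hypothetical: constant 10 A charge for 1800 s moves SOC from 0.50 to 0.60
cap = capacity_from_charge_throughput([10.0] * 1800, 1.0, 0.50, 0.60)
print(cap)  # ~50 Ah
```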

  13. Comparison of groundwater recharge estimation techniques in an alluvial aquifer system with an intermittent/ephemeral stream (Queensland, Australia)

    Science.gov (United States)

    King, Adam C.; Raiber, Matthias; Cox, Malcolm E.; Cendón, Dioni I.

    2017-09-01

    This study demonstrates the importance of the conceptual hydrogeological model for the estimation of groundwater recharge rates in an alluvial system interconnected with an ephemeral or intermittent stream in south-east Queensland, Australia. The losing/gaining condition of these streams is typically subject to temporal and spatial variability, and knowledge of these hydrological processes is critical for the interpretation of recharge estimates. Recharge rate estimates of 76-182 mm/year were determined using the water budget method. The water budget method provides useful broad approximations of recharge and discharge fluxes. The chloride mass balance (CMB) method and the tritium method were used on 17 and 13 sites respectively, yielding recharge rates of 1-43 mm/year (CMB) and 4-553 mm/year (tritium method). However, the conceptual hydrogeological model confirms that the results from the CMB method at some sites are not applicable in this setting because of overland flow and channel leakage. The tritium method was appropriate here and could be applied to other alluvial systems, provided that channel leakage and diffuse infiltration of rainfall can be accurately estimated. The water-table fluctuation (WTF) method was also applied to data from 16 bores; recharge estimates ranged from 0 to 721 mm/year. The WTF method was not suitable where bank storage processes occurred.
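The two point-scale methods named above reduce to simple balances. A sketch with hypothetical inputs (note that the CMB assumption that all groundwater chloride originates from rainfall is precisely the one the study found violated at some sites):

```python
def recharge_cmb(precip_mm_yr, cl_precip_mg_l, cl_gw_mg_l):
    """Chloride mass balance: R = P * Cl_P / Cl_GW.
    Assumes all chloride in groundwater comes from rainfall; overland
    flow and channel leakage invalidate this at some sites."""
    return precip_mm_yr * cl_precip_mg_l / cl_gw_mg_l

def recharge_wtf(specific_yield, head_rise_m):
    """Water-table fluctuation: R = Sy * dh for a given rise event."""
    return specific_yield * head_rise_m * 1000.0  # metres -> mm

print(recharge_cmb(800.0, 5.0, 200.0))  # 20.0 mm/year
print(recharge_wtf(0.08, 0.5))          # 40.0 mm
```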

  14. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to and often smaller than the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy within 3.6 mm, comparable to or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.

  15. Solar Irradiance Measurements Using Smart Devices: A Cost-Effective Technique for Estimation of Solar Irradiance for Sustainable Energy Systems

    Directory of Open Access Journals (Sweden)

    Hussein Al-Taani

    2018-02-01

    Full Text Available Solar irradiance measurement is a key component in estimating solar irradiation, which is necessary and essential for designing sustainable energy systems such as photovoltaic (PV) systems. The measurement is typically done with sophisticated devices designed for this purpose. In this paper we propose a smartphone-aided setup to estimate the solar irradiance at a given location. The setup is accessible, easy to use and cost-effective. The method we propose does not have the accuracy of a high-precision irradiance meter, but it has the advantage of being readily accessible on any smartphone. It could serve as a quick tool to estimate irradiance measurements in the preliminary stages of PV system design. Furthermore, it could act as a cost-effective educational tool in sustainable energy courses where understanding solar radiation variations is an important aspect.
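The abstract does not state the conversion used, but smartphone ambient-light sensors report illuminance in lux, which is often mapped to broadband irradiance by dividing by an assumed luminous efficacy of daylight; 120 lm/W is a commonly quoted round figure. A hedged sketch:

```python
def irradiance_from_lux(illuminance_lux, luminous_efficacy_lm_per_w=120.0):
    """Rough broadband irradiance (W/m^2) from a phone light sensor reading.
    The 120 lm/W daylight efficacy is an assumed round figure; the actual
    value varies with solar spectrum, sky condition and sensor response."""
    return illuminance_lux / luminous_efficacy_lm_per_w

print(irradiance_from_lux(96000.0))  # 800.0 W/m^2
```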

  16. A Novel Grid Impedance Estimation Technique based on Adaptive Virtual Resistance Control Loop Applied to Distributed Generation Inverters

    DEFF Research Database (Denmark)

    Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem

    2013-01-01

    and to take the decision of either keeping the DG connected or disconnecting it from the utility grid. The proposed method is based on a fast and easy grid fault detection method. A virtual damping resistance is used to drive the system to resonance in order to extract the grid impedance parameters, both...... the power quality and even damage some sensitive loads connected at the point of common coupling (PCC). This paper presents a detection and estimation method for grid impedance variation. This estimation technique aims to improve the dynamics of the distributed generation (DG) interfacing inverter control...

  17. Comparative estimation of efficiency of playing method at perfection of technique of fight for the capture of young judoists

    Directory of Open Access Journals (Sweden)

    Musakhanov A.K.

    2012-12-01

    Full Text Available The efficiency of different methods for teaching young judoists the technique of fighting for a grip is considered. Two directions are compared: strictly regulated exercise and game-based methods. The study involved 28 judoists aged 8-10 years, and the experiment lasted two weeks. One group of boys played games based on snatching ribbons (clothes-pins and bandages) fastened to the opponent's kimono. The second group practiced taking basic grips and held training bouts with grip-taking tasks. The training program combined game-based methods with strictly regulated exercise. Comparison of the training programs showed how each specifically affects different indicators of grip-fighting technique. For training in the technique of fighting for a grip, the combined use of strictly regulated exercise and game-based methods is recommended.

  18. An application of time-frequency signal analysis technique to estimate the location of an impact source on a plate type structure

    International Nuclear Information System (INIS)

    Park, Jin Ho; Lee, Jeong Han; Choi, Young Chul; Kim, Chan Joong; Seong, Poong Hyun

    2005-01-01

    We reviewed whether time-frequency signal analysis techniques are suitable for estimating the location of an impact source on a plate structure. The STFT (Short Time Fourier Transform), WVD (Wigner-Ville Distribution) and CWT (Continuous Wavelet Transform) methods are introduced, and their advantages and disadvantages are described using a simulated signal component. The essence of the proposed techniques is to separate the traveling waves in both the time and frequency domains using the dispersion characteristics of the structural waves. These time-frequency methods are expected to be more useful than conventional time domain analyses for the impact localization problem on a plate type structure. We also conclude that the smoothed WVD gives a more reliable means of location estimation in a noisy environment than the other methodologies.
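For contrast with the time-frequency methods advocated here, the conventional time-domain baseline is cross-correlation time-delay estimation between two sensors, which assumes a single non-dispersive wave speed. A sketch with a hypothetical pulse:

```python
import numpy as np

def impact_position_1d(sig_a, sig_b, fs_hz, sensor_gap_m, wave_speed_m_s):
    """Locate an impact between two sensors from the arrival-time difference,
    assuming one non-dispersive wave speed. (The paper's point is that real
    plate waves are dispersive, which is why it moves to STFT/WVD/CWT; this
    is only the time-domain baseline.)"""
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(sig_a) - 1)  # samples by which b lags a
    dt = lag / fs_hz
    return (sensor_gap_m - wave_speed_m_s * dt) / 2.0

# Hypothetical: sensors 1 m apart, 100 m/s wave, impact 0.3 m from sensor A
fs = 1000.0
a = np.zeros(100); a[3] = 1.0   # arrives at A after 0.3 m / 100 m/s = 3 ms
b = np.zeros(100); b[7] = 1.0   # arrives at B after 0.7 m / 100 m/s = 7 ms
print(impact_position_1d(a, b, fs, 1.0, 100.0))  # 0.3 (metres from sensor A)
```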

  19. Tree-level imputation techniques to estimate current plot-level attributes in the Pacific Northwest using paneled inventory data

    Science.gov (United States)

    Bianca Eskelson; Temesgen Hailemariam; Tara Barrett

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the US Forest Service conducts a nationwide annual inventory. One panel (20% or 10% of all plots in the eastern and western United States, respectively) is measured each year. The precision of the estimates for any given year from one panel is low, and the moving average (MA), which is considered to be the default...

  20. An Implementation of Estimation Techniques to a Hydrological Model for Prediction of Runoff to a Hydroelectric Power-Station

    Directory of Open Access Journals (Sweden)

    Magne Fjeld

    1981-01-01

    Full Text Available Parameter and state estimation algorithms have been applied to a hydrological model of a catchment area in southern Norway to yield improved control of water resources and better economy and efficiency in the running of the power station, as experience since the system was installed on-line in the summer of 1978 has shown.

  2. Estimating regional greenhouse gas fluxes: An uncertainty analysis of planetary boundary layer techniques and bottom-up inventories

    Science.gov (United States)

    Quantification of regional greenhouse gas (GHG) fluxes is essential for establishing mitigation strategies and evaluating their effectiveness. Here, we used multiple top-down approaches and multiple trace gas observations at a tall tower to estimate GHG regional fluxes and evaluate the GHG fluxes de...

  3. Development of a Probabilistic Technique for On-line Parameter and State Estimation in Non-linear Dynamic Systems

    International Nuclear Information System (INIS)

    Tunc Aldemir; Miller, Don W.; Hajek, Brian K.; Peng Wang

    2002-01-01

    The DSD (Dynamic System Doctor) is system-independent, interactive software under development for on-line state/parameter estimation in dynamic systems (1), partially supported through a Nuclear Engineering Education Research (NEER) grant during 1998-2001. This paper summarizes the recent accomplishments in improving the user-friendliness and computational capability of DSD.

  4. Estimation of the glomerular filtration rate in infants and children using iohexol and X-ray fluorescence technique

    International Nuclear Information System (INIS)

    Stake, G.

    1992-01-01

    The aim of the present study was to establish methods for the estimation of the glomerular filtration rate (GFR) in children. The conclusions and clinical implications of the study are as follows: Urography with iohexol in children had no significant influence on the GFR. Valid GFR estimates were calculated from the plasma disappearance rate obtained from two plasma samples taken three and four hours after the injection of iohexol. Both iohexol and metrizoate caused a transitory, increased renal excretion of alkaline phosphatase. GFR as well as the excretion of albumin and β2-microglobulin were unchanged. Using the weight-related empirical distribution volume for determination of GFR from the plasma sample taken three hours after the injection of iohexol, a high degree of agreement was found between the preliminary single-sample GFR estimate and the reference, two-plasma-sample GFR. However, the relationship was curvilinear, and in order to obtain a value for the final three-hour single-sample GFR equal to the reference GFR, the preliminary value had to be corrected by a second-degree correction factor. The day-to-day variations of GFRs estimated with the iohexol methods were similar to those obtained with other standard methods. In another group of infants and children, independent of but otherwise comparable to the patients who formed the basis for the single-sample iohexol method, it was confirmed that valid GFR estimates were obtained from the three-hour single plasma sample. GFR determinations from one-hour, two-hour, and four-hour single samples further supported that the optimal sampling time in patients with GFR down to 20 ml/min per 1.73 m² was three hours. 53 refs., 5 figs

  5. The application of digital imaging techniques in the in vivo estimation of the body composition of pigs: a review

    NARCIS (Netherlands)

    Szabo, C.; Babinszky, L.; Verstegen, M.W.A.; Vangen, O.; Jansman, A.J.M.; Kanis, E.

    1999-01-01

    Calorimetry and comparative slaughter measurement are techniques widely used to measure chemical body composition of pigs, while dissection is the standard method to determine physical (tissue) composition of the body. The disadvantage of calorimetry is the small number of observations possible,

  7. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  8. Arterial stiffness estimation in healthy subjects: a validation of oscillometric (Arteriograph) and tonometric (SphygmoCor) techniques.

    Science.gov (United States)

    Ring, Margareta; Eriksson, Maria Jolanta; Zierath, Juleen Rae; Caidahl, Kenneth

    2014-11-01

    Arterial stiffness is an important cardiovascular risk marker, which can be measured noninvasively with different techniques. To validate such techniques in healthy subjects, we compared the recently introduced oscillometric Arteriograph (AG) technique with the tonometric SphygmoCor (SC) method and their associations with carotid ultrasound measures and traditional risk indicators. Sixty-three healthy subjects aged 20-69 (mean 48 ± 15) years were included. We measured aortic pulse wave velocity (PWVao) and augmentation index (AIx) by AG and SC, and with SC also the PWVao standardized to 80% of the direct distance between carotid and femoral sites (St-PWVaoSC). The carotid strain, stiffness index and intima-media thickness (cIMTmean) were evaluated by ultrasound. PWVaoAG (8.00 ± 2.16 m s(-1)) was higher (P < ...) than PWVao by SC. Stiffness indices by AG and SC correlate with vascular risk markers in healthy subjects. AIxao results by AG and SC are closely interrelated, but higher values are obtained by AG. In the lower range, PWVao values by AG and SC are similar, but differ for higher values. Our results imply the necessity to apply one and the same technique for repeated studies.

  9. Estimating sensitivity of the Kato-Katz technique for the diagnosis of Schistosoma mansoni and hookworm in relation to infection intensity.

    Directory of Open Access Journals (Sweden)

    Oliver Bärenbold

    2017-10-01

    Full Text Available The Kato-Katz technique is the most widely used diagnostic method in epidemiologic surveys and drug efficacy trials pertaining to intestinal schistosomiasis and soil-transmitted helminthiasis. However, the sensitivity of the technique is low, particularly for the detection of light-intensity helminth infections. Examination of multiple stool samples reduces the diagnostic error; yet, most studies rely on a single Kato-Katz thick smear, thus underestimating infection prevalence. We present a model which estimates the sensitivity of the Kato-Katz technique for Schistosoma mansoni and hookworm as a function of infection intensity for repeated stool sampling, and we provide estimates of the age-dependent 'true' prevalence. We find that the sensitivity for S. mansoni diagnosis is dominated by missed light infections, which have a low probability of being diagnosed correctly even through repeated sampling. The overall sensitivity strongly depends on the mean infection intensity. In particular, at an intensity of 100 eggs per gram of stool (EPG), we estimate a sensitivity of 50% and 80% for one and two samples, respectively. At an infection intensity of 300 EPG, we estimate a sensitivity of 62% for one sample and 90% for two samples. The sensitivity for hookworm diagnosis is dominated by day-to-day variation, with typical values for one, two, three, and four samples equal to 50%, 75%, 85%, and 95%, respectively, while it is only weakly dependent on the mean infection intensity in the population. We recommend taking at least two samples, estimating the 'true' prevalence of S. mansoni by taking into account the dependence of the sensitivity on the mean infection intensity, and estimating the 'true' hookworm prevalence by taking into account the sensitivity given in the current study.
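If one assumes independent repeated samples with a fixed per-sample sensitivity s (a simplification of the paper's intensity-dependent model), the combined sensitivity of n samples is 1 - (1 - s)^n, which roughly reproduces the hookworm figures quoted above:

```python
def combined_sensitivity(per_sample_sensitivity, n_samples):
    """Sensitivity of n repeated tests, assuming independent samples with
    equal per-sample sensitivity. A simplification: the paper models
    sensitivity as a function of infection intensity."""
    return 1.0 - (1.0 - per_sample_sensitivity) ** n_samples

# With s = 0.5 per sample (the one-sample hookworm figure quoted above):
for n in (1, 2, 3, 4):
    print(n, combined_sensitivity(0.5, n))  # 0.5, 0.75, 0.875, 0.9375
```

These values sit close to the reported 50%, 75%, 85% and 95%, consistent with day-to-day variation dominating hookworm diagnosis.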

  10. A comparative study of 232Th and 238U activity estimation in soil samples by gamma spectrometry and Neutron Activation Analysis (NAA) technique

    International Nuclear Information System (INIS)

    Rekha, A.K.; Anilkumar, S.; Narayani, K.; Babu, D.A.R.

    2012-01-01

    Radioactivity in the environment is mainly due to naturally occurring radionuclides such as uranium and thorium, with their daughter products, and potassium. Although gamma spectrometry is the most commonly used non-destructive method for the quantification of these naturally occurring radionuclides, Neutron Activation Analysis (NAA), a well-established analytical technique, can also be used. However, NAA is time-consuming and requires appropriate standards, careful sample preparation, etc. In this paper, the 232Th and 238U activities estimated using gamma-ray spectrometry and the NAA technique are compared. In the direct gamma spectrometry method, the samples were analysed after sealing in a 250 ml container, whereas for NAA, about 300 mg of each sample was subjected to gamma spectrometry after irradiation. The 238U and 232Th activities (in Bq/kg) in the samples were estimated after proper efficiency correction and compared. The estimated activities from these two methods are in good agreement: the variation in 238U and 232Th activity values is within ±15%, which is acceptable for environmental samples.
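The ±15% agreement quoted above is a percent difference between the two activity estimates relative to their mean; a sketch with hypothetical activities:

```python
def relative_difference_pct(a_gamma, a_naa):
    """Percent difference between two activity estimates, relative to
    their mean -- the kind of agreement check quoted above (within ±15%)."""
    mean = (a_gamma + a_naa) / 2.0
    return (a_gamma - a_naa) / mean * 100.0

# Hypothetical 232Th activities (Bq/kg) from gamma spectrometry and NAA:
print(relative_difference_pct(54.0, 60.0))  # about -10.5%, within ±15%
```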

  11. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. 
The Ecuador case illustrates that
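The distinction the paper draws between controlling observable bias (matching plus differences in means) and time-invariant unobservable bias (fixed effects) can be seen in a toy noiseless two-period panel; all numbers are hypothetical:

```python
import numpy as np

# Synthetic two-period panel: unit fixed effects a_i are correlated with
# treatment, so a cross-sectional comparison is biased while the
# first-difference (fixed effects) estimator recovers the true effect.
a = np.array([5.0, 6.0, 0.0, 1.0])        # unit effects; treated units larger
treated = np.array([True, True, False, False])
tau = 2.0                                  # true treatment effect
y1 = a.copy()                              # period 1: nobody treated yet
y2 = a + tau * treated                     # period 2: units 0 and 1 treated

# Cross-sectional difference in means (period 2 only): biased by a_i
cross_sec = y2[treated].mean() - y2[~treated].mean()   # 7.0, not 2.0

# First-difference estimator: differencing removes the fixed effects
dy = y2 - y1
fixed_eff = dy[treated].mean() - dy[~treated].mean()   # 2.0, the truth
print(cross_sec, fixed_eff)
```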

  13. Fetal dose from radiotherapy photon beams: Physical basis, techniques to estimate radiation dose outside of the treatment field, biological effects and professional considerations

    International Nuclear Information System (INIS)

    Stovall, Marilyn; Blackwell, C. Robert

    1997-01-01

    Purpose/Objective: The presentation will review: 1. The physical basis of radiation dose outside of the treatment field. 2. Techniques to estimate and reduce fetal dose. 3. Clinical examples of fetal dose estimation and reduction. 4. Biological effects of fetal irradiation. 5. Professional considerations. Approximately 4000 women per year in the United States require radiotherapy during pregnancy. This report presents data and techniques that allow the medical physicist to estimate the radiation dose the fetus will receive and to reduce this dose with appropriate shielding. Out-of-beam data are presented for a variety of photon beams, including cobalt-60 gamma rays and x rays from 4 to 18 MV. Designs for simple and inexpensive to more complex and expensive types of shielding equipment are described. Clinical examples show that proper shielding can reduce the radiation dose to the fetus by 50%. In addition, a review of the biological aspects of irradiation enables estimates of the risks of lethality, growth retardation, mental retardation, malformation, sterility, cancer induction, and genetic defects to the fetus. A summary of professional considerations/recommendations is also provided as a guide for the radiation oncologist and medical physicist

  14. Validation of an extraction paper chromatography (EPC) technique for estimation of trace levels of 90Sr in 90Y solutions obtained from 90Sr/90Y generator systems

    International Nuclear Information System (INIS)

    Usha Pandey; Yogendra Kumar; Ashutosh Dash

    2014-01-01

    While the extraction paper chromatography (EPC) technique constitutes a novel paradigm for the determination of a few becquerels of 90Sr in MBq quantities of 90Y obtained from a 90Sr/90Y generator, validation of the technique is essential to ensure its usefulness as a real-time analytical tool. With a view to exploring the relevance and applicability of the EPC technique as a real-time quality control (QC) technique for the routine estimation of 90Sr content in generator-produced 90Y, a systematic validation study was carried out diligently, not only to establish its worthiness but also to broaden its horizon. The ability of the EPC technique to separate trace amounts of Sr2+ in the presence of large amounts of Y3+ was verified. The specificity of the technique for Y3+ was demonstrated with 90Y obtained by neutron irradiation. The method was validated under real experimental conditions and compared with a QC method described in the US Pharmacopeia for detection of 90Sr levels in 90Y radiopharmaceuticals. (author)

  15. A new technique for the estimation of sea surface salinity in the tropical Indian Ocean from OLR

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, V.S.N.; Subrahmanyam, B.; Tilvi, V.; O'Brien, J.J.

    A new... Ocean. The estimated SSS on a monthly 2.5° × 2.5° grid is close to the WOA98 SSS, with differences within ±0.5–0.8 away from the coastal region. The estimated SSS also agrees reasonably with the observed SSS along the trans...

  16. ABOUT RISK PROCESS ESTIMATION TECHNIQUES EMPLOYED BY A VIRTUAL ORGANIZATION WHICH IS DIRECTED TOWARDS THE INSURANCE BUSINESS

    Directory of Open Access Journals (Sweden)

    Covrig Mihaela

    2008-05-01

    Full Text Available In a virtual organization directed towards the insurance business, the estimation of the risk process and of the ruin probability are important concerns: for researchers at the theoretical level, and for the management of the company, as these influence the insurer's strategy. We consider the evolution of the insurer's surplus process over an extended period of time. In this paper, we present some methods for the estimation of the ruin probability and for the evaluation of a reserve fund. We discuss the ruin probability with respect to the parameters of the individual claim distribution, the load factor of premiums and the intensity parameter of the claim number process. We analyze the model in which premiums are computed according to the mean value principle. We also consider the case in which the initial capital is proportional to the expected value of the individual claim. We give numerical illustrations.
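The abstract does not give its formulas. A standard closed form for this setting is the classical Cramér–Lundberg ruin probability with exponentially distributed claims and premiums loaded by a safety factor theta, sketched here with hypothetical parameters (not necessarily the paper's exact model):

```python
import math

def ruin_probability_exponential(u, mean_claim, load_factor):
    """Classical Cramér-Lundberg ruin probability for exponentially
    distributed claims with mean mu, premiums set by the mean value
    principle with safety loading theta:
        psi(u) = 1/(1+theta) * exp(-theta * u / ((1+theta) * mu))
    A textbook closed form used as an illustration here."""
    theta, mu = load_factor, mean_claim
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)

# Hypothetical: mean claim 100, 20% loading, initial capital 500
print(ruin_probability_exponential(500.0, 100.0, 0.2))
```

Note that psi(0) = 1/(1+theta), so even with zero initial capital ruin is not certain as long as the loading is positive.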

  17. Where You Live Matters: Localising Environmental Impacts on Health, Nutrition and Poverty in Cambodia Using Small Area Estimation Techniques

    Science.gov (United States)

    Nilsen, K.; van Soesbergen, A.; Matthews, Z.

    2016-12-01

    Socioeconomic development depends on local environments. However, scientific evidence quantifying the impact of environmental factors on health, nutrition and poverty at subnational levels is limited. This is because socioeconomic indicators are derived from sample surveys that are representative only at aggregate levels, whereas environmental variables are mostly available in high-resolution grids. Cambodia was selected because of its commitment to development in the context of a rapidly deteriorating environment. Although the country has made considerable progress since 2005, access to health services is limited, a quarter of the population is still poor and 40% of rural children are malnourished. Cambodia is also facing considerable environmental challenges including high deforestation rates, land degradation and natural hazards. Addressing existing gaps in the knowledge of environmental impacts on health and livelihoods, this study applies small area estimation (SAE) to quantify health, nutritional and poverty outcomes in the context of local environments. SAE produces reliable subnational estimates of socioeconomic outcomes available only from sample surveys by combining them with information from auxiliary sources (census). A model is used to explain common trends across areas, and a random effect structure is applied to explain the observed extra heterogeneity. SAE models predicting health, nutrition and poverty outcomes excluding and including contextual environmental variables on natural hazard vulnerability, forest cover, climate, and agricultural production are compared. Results are mapped at regional and district levels to spatially assess the impacts of environmental variation on the outcomes. Inter- and intra-regional inequalities are also estimated to examine the efficacy of health/socioeconomic policy targeting based on geographic location. Preliminary results suggest that localised environmental factors have considerable impacts on the indicators estimated and should

  18. Development of an Evapotranspiration Data Assimilation Technique for Streamflow Estimates: A Case Study in a Semi-Arid Region

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2017-09-01

    Full Text Available Streamflow estimates are substantially important as fresh water shortages increase in arid and semi-arid regions, where evapotranspiration (ET) is a significant contribution to the water balance. In this regard, evapotranspiration data can be assimilated into a distributed hydrological model (SWAT, Soil and Water Assessment Tool) to improve streamflow estimates. The SWAT model has been widely used for streamflow estimation, but applications combining SWAT with ET products are rare. This study therefore aims to develop a SWAT-based evapotranspiration data assimilation system. In particular, SWAT is gridded at the Hydrologic Response Unit (HRU) level to incorporate gridded ET products acquired from the remote sensing-based ETMonitor model. In the modeling case, the gridded SWAT (GSWAT) shows good agreement in streamflow modeling with the original SWAT; the small margin between them is due to a modeling-domain mismatch caused by different HRU delineations. In the ET assimilation case, we carry out a synthetic data experiment to illustrate the state augmentation Direct Insertion (DI) method and a real data experiment for the upper Heihe River Basin. The results demonstrate the benefits of ET assimilation for improving the representation of hydrologic processes. In the future, more remotely sensed data can be assimilated into the data assimilation system to provide more reliable hydrological predictions.
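The state-augmentation Direct Insertion (DI) scheme mentioned above is the simplest assimilation update: wherever a gridded ET observation exists, it replaces the model-predicted value outright. A minimal sketch (plain NumPy; the grid layout, values and NaN-for-missing convention are illustrative assumptions, not the GSWAT interface):

```python
import numpy as np

def direct_insertion(model_et, obs_et):
    """Direct Insertion (DI): wherever a gridded ET observation is
    available (not NaN), it replaces the model-predicted value outright.
    model_et, obs_et: arrays on the same HRU/grid layout."""
    updated = model_et.copy()
    mask = ~np.isnan(obs_et)
    updated[mask] = obs_et[mask]
    return updated

# toy 2x2 grid: one cell lacks an observation, so the model value is kept
model = np.array([[3.0, 2.5], [1.0, 4.0]])
obs = np.array([[2.8, np.nan], [1.2, 3.5]])
print(direct_insertion(model, obs))
```

In the real system the inserted ET would then feed back into the SWAT soil-water states; only the overwrite step is shown here.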

  19. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single cable model are prone to larger errors as the cable gets shorter and becomes more sensitive to flexural rigidity. Inverse analysis and system identification methods based on finite element models have therefore been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression methods for calculating tension in terms of natural frequency errors. However, in model-based methods the estimation error of the tension can vary with the accuracy of the finite element model; in particular, the boundary conditions affect the results more profoundly as the cable gets shorter. It is therefore important to identify the boundary conditions experimentally where possible. The FE model-based tension estimation method using system identification can take various boundary conditions into account and, since it is not sensitive to the number of natural frequency inputs, is broadly applicable.
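The "string theory" baseline that the model-based methods are compared against can be stated compactly: for a taut string, f_n = (n / 2L) * sqrt(T / m), so the fundamental frequency gives T = 4 m L^2 f1^2. A sketch with hypothetical cable properties (the numbers are illustrative, not from the paper):

```python
import numpy as np

def tension_string_theory(f1_hz, length_m, mass_per_m):
    """Taut-string tension estimate: f_n = (n / 2L) * sqrt(T / m)
    => T = 4 * m * L^2 * f1^2.  Ignores flexural rigidity and clamp
    boundary conditions, which is why its error grows as the cable
    gets shorter (the motivation for FE-model-based identification)."""
    return 4.0 * mass_per_m * length_m**2 * f1_hz**2

# hypothetical hanger cable: 12 m long, 60 kg/m, first mode at 8 Hz
T = tension_string_theory(8.0, 12.0, 60.0)
print(f"estimated tension ~ {T / 1e3:.0f} kN")
```

The FE-model-based approach replaces this closed form with an iterative update of the model's tension (and boundary stiffness) until the computed natural frequencies match the measured ones.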

  20. Noninvasive microelectrode ion flux estimation technique (MIFE) for the study of the regulation of root membrane transport by cyclic nucleotides

    KAUST Repository

    Ordoñez, Natalia Maria; Shabala, Lana; Gehring, Christoph A; Shabala, Sergey Nikolayevich

    2013-01-01

    Changes in ion permeability and subsequently intracellular ion concentrations play a crucial role in intracellular and intercellular communication and, as such, confer a broad array of developmental and adaptive responses in plants. These changes are mediated by the activity of plasma-membrane based transport proteins many of which are controlled by cyclic nucleotides and/or other signaling molecules. The MIFE technique for noninvasive microelectrode ion flux measuring allows concurrent quantification of net fluxes of several ions with high spatial (μm range) and temporal (ca. 5 s) resolution, making it a powerful tool to study various aspects of downstream signaling events in plant cells. This chapter details basic protocols enabling the application of the MIFE technique to study regulation of root membrane transport in general and cyclic nucleotide mediated transport in particular. © Springer Science+Business Media New York 2013.

  1. Noninvasive microelectrode ion flux estimation technique (MIFE) for the study of the regulation of root membrane transport by cyclic nucleotides

    KAUST Repository

    Ordoñez, Natalia Maria

    2013-09-03

    Changes in ion permeability and subsequently intracellular ion concentrations play a crucial role in intracellular and intercellular communication and, as such, confer a broad array of developmental and adaptive responses in plants. These changes are mediated by the activity of plasma-membrane based transport proteins many of which are controlled by cyclic nucleotides and/or other signaling molecules. The MIFE technique for noninvasive microelectrode ion flux measuring allows concurrent quantification of net fluxes of several ions with high spatial (μm range) and temporal (ca. 5 s) resolution, making it a powerful tool to study various aspects of downstream signaling events in plant cells. This chapter details basic protocols enabling the application of the MIFE technique to study regulation of root membrane transport in general and cyclic nucleotide mediated transport in particular. © Springer Science+Business Media New York 2013.

  2. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case
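The contrast between the two estimation strategies can be sketched on synthetic data: matching on an observable controls only observable bias, while a two-period fixed-effects (first-difference) estimator also removes a time-invariant unobservable. Everything below (the data-generating process, the true effect of -1, the confounding strengths) is an illustrative assumption, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n).astype(bool)
x = rng.normal(0.0, 1.0, n) + 0.5 * treated            # observable confounder
alpha = rng.normal(0.0, 1.0, n) + 0.8 * treated        # time-invariant unobservable
y0 = 2.0 * x + alpha + rng.normal(0.0, 0.1, n)                  # pre-period outcome
y1 = 2.0 * x + alpha - 1.0 * treated + rng.normal(0.0, 0.1, n)  # post: true effect = -1

# (1) nearest-neighbour matching on x, then difference in means (post-period only)
controls = np.where(~treated)[0]
matches = [controls[np.argmin(np.abs(x[controls] - x[i]))] for i in np.where(treated)[0]]
att_matching = y1[treated].mean() - y1[matches].mean()

# (2) two-period fixed effects = first differences, which removes alpha entirely
dy = y1 - y0
att_fe = dy[treated].mean() - dy[~treated].mean()

print(att_matching, att_fe)
```

With the unobservable correlated with treatment, the matched difference in means is biased toward zero while the first-difference estimate recovers the true effect, mirroring the divergence across estimators described for the Russia case.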

  3. On the estimate of the transpiration in Mediterranean heterogeneous ecosystems with the coupled use of eddy covariance and sap flow techniques.

    Science.gov (United States)

    Corona, Roberto; Curreli, Matteo; Montaldo, Nicola; Oren, Ram

    2013-04-01

    Mediterranean ecosystems are commonly heterogeneous savanna-like ecosystems, with contrasting plant functional types (PFTs) competing for water use. Mediterranean regions suffer water scarcity due to dry climate conditions. In semi-arid regions evapotranspiration (ET) is the leading loss term of the root-zone water budget, with a yearly magnitude that may be roughly equal to the precipitation. Despite the attention these ecosystems are receiving, a general lack of knowledge persists about the estimation of ET and the relationship between ET and the survival strategies of the different PFTs under water stress. During the dry summers these water-limited heterogeneous ecosystems are mainly characterized by simple dual-PFT landscapes with strong, resistant woody vegetation and bare soil, since the grass has died. In these conditions, due to the low signal of the land surface fluxes captured by the sonic anemometer and gas analyzer, the widely used eddy covariance technique may fail and its ET estimate is not robust enough. Here the sap flow technique may play a key role, because in theory it provides a direct estimate of woody vegetation transpiration. Through the coupled use of sap flow sensor observations, a 2-D footprint model of the eddy covariance tower and high-resolution satellite images for estimating the footprint land cover map, the eddy covariance measurements can be correctly interpreted, and the ET components (bare soil evaporation and woody vegetation transpiration) can be separated. The case study is the Orroli site in Sardinia (Italy). The site landscape is a mixture of Mediterranean patchy vegetation types: trees, including wild olives and cork oaks, and different shrub and herbaceous species. An extensive field campaign started in 2004. Land-surface fluxes and CO2 fluxes are estimated by an eddy covariance based micrometeorological tower. Soil moisture profiles were also continuously estimated using water

  4. Data assimilation and uncertainty analysis of environmental assessment problems--an application of Stochastic Transfer Function and Generalised Likelihood Uncertainty Estimation techniques

    International Nuclear Information System (INIS)

    Romanowicz, Renata; Young, Peter C.

    2003-01-01

    Stochastic Transfer Function (STF) and Generalised Likelihood Uncertainty Estimation (GLUE) techniques are outlined and applied to an environmental problem concerned with marine dose assessment. The goal of both methods in this application is the estimation and prediction of the environmental variables, together with their associated probability distributions. In particular, they are used to estimate the amount of radionuclides transferred to marine biota from a given source: the British Nuclear Fuels Ltd (BNFL) repository plant in Sellafield, UK. The complexity of the processes involved, together with the large dispersion and scarcity of observations regarding radionuclide concentrations in the marine environment, requires efficient data assimilation techniques. In this regard, the basic STF methods search for identifiable, linear model structures that capture the maximum amount of information contained in the data with a minimal parameterisation. They can be extended for on-line use, based on recursively updated Bayesian estimation and, although applicable only to constant or time-variable parameter (non-stationary) linear systems in the form used in this paper, they have the potential for application to non-linear systems using recently developed State Dependent Parameter (SDP) non-linear STF models. The GLUE-based methods, on the other hand, formulate the problem of estimation using a more general Bayesian approach, usually without prior statistical identification of the model structure. As a result, they are applicable to almost any linear or non-linear stochastic model, although they are much less efficient both computationally and in their use of the information contained in the observations. As expected in this particular environmental application, it is shown that the STF methods give much narrower confidence limits for the estimates due to their more efficient use of the information contained in the data. Exploiting Monte Carlo Simulation (MCS) analysis
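A GLUE analysis as characterized above (Monte Carlo sampling of parameters, an informal likelihood, a subjective behavioural threshold, and likelihood-weighted prediction limits) can be sketched on a toy decay model; the model, threshold and likelihood shape are illustrative assumptions, not those of the marine dose application:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):
    """Toy first-order transfer model: concentration decaying from a source."""
    return np.exp(-k * t)

t_obs = np.array([1.0, 2.0, 4.0, 8.0])
y_obs = model(0.3, t_obs) + rng.normal(0, 0.02, t_obs.size)  # synthetic data, true k = 0.3

# GLUE: Monte Carlo sample the parameter, score each sample with an informal
# likelihood, keep 'behavioural' runs, and weight predictions by likelihood.
k_samples = rng.uniform(0.01, 1.0, 5000)
sse = np.array([np.sum((model(k, t_obs) - y_obs) ** 2) for k in k_samples])
lik = np.exp(-sse / sse.min())          # informal likelihood measure
behavioural = lik > 0.1                 # acceptance threshold (subjective in GLUE)
w = lik[behavioural] / lik[behavioural].sum()

# likelihood-weighted 5-95% prediction limits for the output at a new time
y_pred = model(k_samples[behavioural], 6.0)
order = np.argsort(y_pred)
cdf = np.cumsum(w[order])
lo = y_pred[order][np.searchsorted(cdf, 0.05)]
hi = y_pred[order][np.searchsorted(cdf, 0.95)]
print(lo, hi)
```

The width of these limits depends directly on the informal likelihood and threshold choices, which is the flexibility (and the statistical inefficiency relative to STF) the abstract points out.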

  5. Estimation of the Above Ground Biomass of Tropical Forests using Polarimetric and Tomographic SAR Data Acquired at P Band and 3-D Imaging Techniques

    Science.gov (United States)

    Ferro-Famil, L.; El Hajj Chehade, B.; Ho Tong Minh, D.; Tebaldini, S.; LE Toan, T.

    2016-12-01

    Developing and improving methods to monitor forest biomass in space and time is a timely challenge, especially for tropical forests, for which SAR imaging at longer wavelengths presents an interesting potential. Nevertheless, directly estimating tropical forest biomass from classical 2-D SAR images may prove a very complex and ill-conditioned problem, since a SAR echo is composed of numerous contributions whose features and importance depend on many geophysical parameters, such as ground humidity, roughness, topography… that are not related to biomass. Recent studies showed that SAR modes of diversity, i.e. polarimetric intensity ratios or interferometric phase centers, do not fully resolve this under-determined problem, whereas Pol-InSAR tree height estimates may be related to biomass through allometric relationships, in general with significant levels of uncertainty and lack of robustness over tropical forests. In this context, 3-D imaging using SAR tomography represents an appealing solution at longer wavelengths, for which wave penetration properties ensure a high-quality mapping of tropical forest reflectivity in the vertical direction. This paper presents a series of studies conducted, in the frame of the preparation of the next ESA mission BIOMASS, on the estimation of biomass over a tropical forest in French Guiana, using Polarimetric SAR Tomography (Pol-TomSAR) data acquired at P band by ONERA. It is shown that Pol-TomSAR significantly improves the retrieval of forest above-ground biomass (AGB) in a high-biomass forest (200 up to 500 t/ha), with an error of only 10% at 1.5-ha resolution using reflectivity estimates sampled at a predetermined elevation. The robustness of this technique is tested by applying the same approach to another site, and the results show a similar relationship between AGB and tomographic reflectivity over both sites. The excellent ability of Pol-TomSAR to retrieve both canopy top heights and ground topography with an error

  6. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    Science.gov (United States)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial for studying the long-wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the big events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core-sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients that best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic

  7. Do We Really Need Sinusoidal Surface Temperatures to Apply Heat Tracing Techniques to Estimate Streambed Fluid Fluxes?

    Science.gov (United States)

    Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.

    2017-12-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
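The frequency-independence argument can be checked numerically: projecting each depth's record onto a single chosen frequency isolates that component even when the forcing is asymmetric (multi-frequency), and the resulting amplitude ratio and phase lag are exactly those of the diurnal component. A sketch with synthetic two-depth records (the damping factor and lag are assumed values, not field data):

```python
import numpy as np

def component_at(signal, dt_s, f_hz):
    """Amplitude and phase of one frequency component, obtained by
    projecting the series onto e^{-i 2 pi f t} (a single-frequency DFT).
    Works for any periodic forcing shape: only the component at the
    chosen frequency is used."""
    t = np.arange(signal.size) * dt_s
    c = np.sum((signal - signal.mean()) * np.exp(-2j * np.pi * f_hz * t)) * 2 / signal.size
    return np.abs(c), np.angle(c)

# synthetic two-depth records: asymmetric (multi-frequency) surface forcing
day = 86400.0
dt = 900.0
t = np.arange(0, 4 * day, dt)
f1 = 1.0 / day
shallow = 15 + 5 * np.sin(2 * np.pi * f1 * t) + 2 * np.sin(4 * np.pi * f1 * t)
# deeper sensor: diurnal component damped (x0.4) and lagged by 3 h
lag = 3 * 3600.0
deep = (12 + 0.4 * 5 * np.sin(2 * np.pi * f1 * (t - lag))
        + 0.1 * 2 * np.sin(4 * np.pi * f1 * (t - lag)))

A_s, ph_s = component_at(shallow, dt, f1)
A_d, ph_d = component_at(deep, dt, f1)
ratio = A_d / A_s                     # amplitude ratio at the diurnal frequency
dphi = (ph_s - ph_d) % (2 * np.pi)    # phase lag; feeds the standard solutions
print(ratio, dphi)
```

The recovered ratio (0.4) and lag (pi/4, i.e. 3 h of a 24 h cycle) match the values built into the deeper series and are untouched by the second harmonic, which is what the re-derivation above implies: the asymmetric forcing shape does not violate any assumption.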

  8. Very Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Niño Vasquez, L. F.; Vargas-Jimenez, C. A.

    2012-12-01

    To minimize adverse effects originated by high-magnitude earthquakes, early warning has become a powerful tool to anticipate a seismic wave arrival at a specific location and to bring people and government agencies timely information to initiate a fast response. To do this, a very fast and accurate characterization of the event must be done, but this process is often carried out using seismograms recorded in at least 4 stations, where processing time is usually greater than the wave travel time to the area of interest, mainly in coarse networks. A faster process can be achieved if only one three component seismic station is used, namely the closest unsaturated station with respect to the epicenter. Here we present a Support Vector Regression algorithm which calculates Magnitude and Epicentral Distance using only 5 seconds of signal since the P wave onset. This algorithm was trained with 36 records of historical earthquakes, where the inputs were the regression parameters of an exponential function estimated by least squares, corresponding to the waveform envelope, and the maximum value of the observed waveform for each component in one single station. A 10 fold Cross Validation was applied for a Normalized Polynomial Kernel, obtaining the mean absolute error for different exponents and complexity parameters. Magnitude could be estimated with 0.16 of mean absolute error and the distance with an error of 7.5 km for distances within 60 to 120 km. This kind of algorithm is easy to implement in hardware and can be used directly in the field station, making it possible to broadcast estimates of these values and generate fast decisions at seismological control centers, increasing the possibility of an effective reaction.

  9. Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Niño, L. F.; Vargas-Jimenez, C. A.

    2013-05-01

    To minimize adverse effects originated by high magnitude earthquakes, early warning has become a powerful tool to anticipate a seismic wave arrival to an specific location, bringing opportune information to people and government agencies to initiate a fast response. To do this, a very fast and accurate characterization of the event must be done but this process is often made using seismograms recorded in at least four stations where processing time is usually greater than the wave arrival time to the interest area, mainly in seismological coarse networks. A faster process can be done if only one three component seismic station, the closest unsaturated station with respect to the epicenter, is used. Here, we present a Support Vector Regression algorithm which calculates Magnitude and Epicentral Distance using only five seconds of signal since P wave onset. This algorithm was trained with 36 records of historical earthquakes, where the input included regression parameters of an exponential function estimated by least squares, of the waveform envelope and the maximum value of the observed waveform for each component in a single station. A ten-fold Cross Validation was applied for a Normalized Polynomial Kernel obtaining the mean absolute error for different exponents and complexity parameters. The Magnitude could be estimated with 0.16 units of mean absolute error and the distance with an error of 7.5 km for distances within 60 to 120 km. This kind of algorithm is easy to implement in hardware and can be used directly in the field seismological sensor to make the broadcast of estimations of these values possible to generate fast decisions at seismological control centers, increasing the possibility of having an effective reaction.
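The feature-extraction step described above can be sketched as follows; the specific envelope model E(t) = A * t * exp(-B * t) and its log-linearisation are assumptions for illustration (the abstract does not give the functional form), fitted by least squares as described:

```python
import numpy as np

def envelope_features(wave, dt):
    """Features in the spirit of the paper: least-squares parameters of an
    assumed exponential envelope model E(t) = A * t * exp(-B * t),
    linearised as ln(E/t) = ln A - B t, plus the peak amplitude."""
    t = np.arange(1, wave.size + 1) * dt
    env = np.abs(wave)
    # linear least squares on the log-transformed model
    slope, intercept = np.polyfit(t, np.log(env / t + 1e-12), 1)
    A, B = np.exp(intercept), -slope
    return A, B, env.max()

# synthetic 5 s, 100 Hz single-component record with a known envelope
dt = 0.01
t = np.arange(1, 501) * dt
rng = np.random.default_rng(2)
wave = 3.0 * t * np.exp(-1.2 * t) * np.sign(rng.normal(size=t.size))
A, B, peak = envelope_features(wave, dt)
print(A, B, peak)
```

The three numbers per component (here A, B and the peak) would then be stacked across the three components to form the input vector of the Support Vector Regression model.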

  10. Assessment of a Technique for Estimating Total Column Water Vapor Using Measurements of the Infrared Sky Temperature

    Science.gov (United States)

    Merceret, Francis J.; Huddleston, Lisa L.

    2014-01-01

    A method for estimating the integrated precipitable water (IPW) content of the atmosphere using measurements of indicated infrared zenith sky temperature was validated over east-central Florida. The method uses inexpensive, commercial off-the-shelf, hand-held infrared thermometers (IRTs). Two such IRTs were obtained from a commercial vendor, calibrated against several laboratory reference sources at KSC, and used to make IR zenith sky temperature measurements in the vicinity of KSC and Cape Canaveral Air Force Station (CCAFS). The calibration and comparison data showed that these inexpensive IRTs provided reliable, stable IR temperature measurements that were well correlated with the NOAA IPW observations.

  11. Comparison of Gene Expression Programming with neuro-fuzzy and neural network computing techniques in estimating daily incoming solar radiation in the Basque Country (Northern Spain)

    International Nuclear Information System (INIS)

    Landeras, Gorka; López, José Javier; Kisi, Ozgur; Shiri, Jalal

    2012-01-01

    Highlights: ► Solar radiation estimation based on Gene Expression Programming is unexplored. ► This approach is evaluated for the first time in this study. ► Other artificial intelligence models (ANN and ANFIS) are also included in the study. ► New alternatives for solar radiation estimation based on temperatures are provided. - Abstract: Surface incoming solar radiation is a key variable for many agricultural, meteorological and solar energy conversion related applications. In the absence of the meteorological sensors required for the detection of global solar radiation it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation, using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). A comparison was also made among these techniques and traditional temperature-based global solar radiation estimation equations. Root mean square error (RMSE), mean absolute error (MAE), RMSE-based skill score (SSRMSE), MAE-based skill score (SSMAE) and the r2 criterion of Nash and Sutcliffe were used to assess the models' performances. An ANN (a four-input multilayer perceptron with 10 neurons in the hidden layer) presented the best performance among the studied models (2.93 MJ m−2 d−1 of RMSE). The ability of the GEP approach to model global solar radiation based on daily atmospheric variables was found to be satisfactory.
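One of the traditional temperature-based equations such studies benchmark against is the Hargreaves-Samani formula, which needs only the diurnal temperature range and extraterrestrial radiation. A minimal sketch (the input values are illustrative, not from the Basque Country dataset):

```python
import math

def hargreaves_samani_rs(tmax_c, tmin_c, ra_mj, krs=0.16):
    """Traditional temperature-based estimate of daily global solar
    radiation (Hargreaves-Samani): Rs = kRs * sqrt(Tmax - Tmin) * Ra,
    with Ra the extraterrestrial radiation (MJ m-2 d-1) and kRs an
    empirical coefficient (~0.16 inland, ~0.19 coastal)."""
    return krs * math.sqrt(tmax_c - tmin_c) * ra_mj

# example day: 12 degC diurnal range, Ra = 30 MJ m-2 d-1
print(hargreaves_samani_rs(24.0, 12.0, 30.0))
```

The GEP/ANN/ANFIS models in the study take the same temperature inputs but learn the mapping from data instead of fixing this functional form.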

  12. Benthic O-2 uptake of two cold-water coral communities estimated with the non-invasive eddy correlation technique

    DEFF Research Database (Denmark)

    Rovelli, Lorenzo; Attard, Karl M.; Bryant, Lee D.

    2015-01-01

    , was a channel-like sound in Northern Norway at a depth of 220 m. Both sites were characterized by the presence of live mounds of the reef framework-forming scleractinian Lophelia pertusa and reef-associated fauna such as sponges, crustaceans and other corals. The measured O-2 uptake at the 2 sites varied...... times higher than the global mean for soft sediment communities at comparable depths. The measurements document the importance of CWC communities for local and regional carbon cycling and demonstrate that the EC technique is a valuable tool for assessing rates of benthic O2 uptake in such complex...

  13. A novel technique for optimal integration of active steering and differential braking with estimation to improve vehicle directional stability.

    Science.gov (United States)

    Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra

    2018-06-11

    This study deals with the enhancement of the directional stability of a vehicle turning at high speed on various road conditions, using integrated active steering and differential braking systems. In this respect, minimal use of intentional asymmetric braking force to compensate for the drawbacks of active steering control, with a small reduction of vehicle longitudinal speed, is desired. To this aim, a new optimal multivariable controller is analytically developed for the integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming scheme extracted from nonlinear phase-plane analysis is also used for managing the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, some required information about the system states and parameters which cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    Science.gov (United States)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The problems of existing methods for determining which technologically interlinked construction processes and activities can be combined are considered under modern conditions for the construction of various facilities. The necessity of identifying common parameters that characterize the nature of the interaction of all technologically related construction and installation processes and activities is shown. The technologies of construction and installation processes for buildings and structures are studied with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative evaluation of the interaction of construction and installation processes and activities, expressed as the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation is used as the basis for calculating the optimum range over which processes and activities can be combined; the calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties which are key for its wide use in forming organizational decisions. The article describes the practical significance of the developed technique.

  15. SVPWM Technique with Varying DC-Link Voltage for Common Mode Voltage Reduction in a Matrix Converter and Analytical Estimation of its Output Voltage Distortion

    Science.gov (United States)

    Padhee, Varsha

    Common Mode Voltage (CMV) in any power converter has been a major contributor to premature motor failures, bearing deterioration, shaft voltage build-up and electromagnetic interference. Intelligent control methods such as Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all the aforementioned problems. Other solutions such as passive filters, shielded cables and EMI filters add to the volume and cost of the entire system; smart SVPWM techniques therefore come with the important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the use of two adjacent active vectors and one zero vector for both the rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions compared to conventional modulation. To summarize, the responsibility of forming output voltages with a particular magnitude and frequency is transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis. An understanding of the SVPWM technique and the switching sequence of the space vectors in detail gives the potential to estimate the RMS value of the switched output voltage of any

  16. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
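The difference between the two estimators can be illustrated on a toy standard-spectrum problem. The spectra below are made-up numbers, and the multiplicative ML-EM update stands in for whatever ML solver SDAT actually uses:

```python
import numpy as np

# Hypothetical standard spectra (columns) for two radioxenon components over
# four energy channels -- stand-ins, not real Xe-131m / Xe-133 shapes.
S = np.array([[5.0, 1.0],
              [3.0, 2.0],
              [1.0, 4.0],
              [0.5, 6.0]])
a_true = np.array([2.0, 3.0])   # true component activities
y = S @ a_true                  # noise-free "measured" spectrum

# Least-squares estimate: closed form via numpy's lstsq.
a_ls, *_ = np.linalg.lstsq(S, y, rcond=None)

# Poisson maximum-likelihood estimate via the multiplicative ML-EM update,
# which maximizes the Poisson likelihood of a linear mixing model:
#   a_j <- a_j * [S^T (y / (S a))]_j / [S^T 1]_j
a_ml = np.ones(2)
for _ in range(2000):
    a_ml *= (S.T @ (y / (S @ a_ml))) / S.sum(axis=0)
```

With noise-free data both estimators recover the true activities; the distinction the abstract describes appears at low counts, where the Poisson likelihood models the count statistics correctly while unweighted least squares does not.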

  17. Analysis of Cross-Seasonal Spectral Response from Kettle Holes: Application of Remote Sensing Techniques for Chlorophyll Estimation

    Directory of Open Access Journals (Sweden)

    Bernd Lennartz

    2012-11-01

    Full Text Available Kettle holes, small inland water bodies usually less than 1 ha in size, are subjected to pollution, drainage, and structural alteration by intensive land use practices. This study presents an analysis of spectral signatures from kettle holes, based on in situ water sampling and reflectance measurements, for application to chlorophyll estimation. Water samples and surface reflectance were collected from 6 ponds in 15 field campaigns (5 in 2007 and 10 in 2008), resulting in a total of 80 spectral datasets. We assessed existing semi-empirical algorithms for determining chlorophyll content in different types of kettle holes using seasonal and cross-seasonal volume reflectance and derivative spectra. Based on this analysis and the optical properties of the water-leaving reflectance from kettle holes, the following typology for remote signal interpretation was proposed: submerged vegetation, phytoplankton dominated, and mixed type.

  18. Estimation technique of corrective effects for forecasting of reliability of the designed and operated objects of the generating systems

    Science.gov (United States)

    Truhanov, V. N.; Sultanov, M. M.

    2017-11-01

    In the present article, statistical data on the failures and malfunctions affecting the operability of heat power installations have been analysed. A mathematical model of the change in the output characteristics of the turbine as a function of the number of failures revealed in service is presented. The model is based on methods of mathematical statistics, probability theory and matrix calculus. Its novelty is that it allows the change of the output characteristic over time to be predicted, with the control actions represented in explicit form. The Weibull distribution is adopted as the desired dynamics of the change of the output characteristic (the reliability function), since it is universal: at various parameter values it reduces to other distributions (for example, the exponential or the normal). It should be noted that the choice of the desired control law makes it possible to determine the necessary control parameters from the accumulated change of the output characteristic. The output characteristic can be adjusted both through the rate of change of the control parameters and through their acceleration. The article describes in detail a technique for evaluating the pseudo-inverse matrix by the method of least squares using standard Microsoft Excel functions. A technique for finding the control actions under constraints on both the output characteristic and the control parameters is also considered, together with the order and sequence of finding the control parameters. A concrete example of finding the control actions over the course of long-term turbine operation is given.

  19. Remote sensing estimation of the total phosphorus concentration in a large lake using band combinations and regional multivariate statistical modeling techniques.

    Science.gov (United States)

    Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi

    2015-03-15

    Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Remote sensing presents a challenge in estimating the phosphorus concentration in water. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using the simple individual band ratio or its natural logarithm and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate
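The band-combination regression at the core of such a scheme can be sketched as an ordinary multivariate least-squares fit. The particular band ratio and band difference below are illustrative placeholders, not the combinations selected in the study:

```python
import numpy as np

def fit_tp_model(bands, tp):
    """Fit TP = b0 + b1*x1 + b2*x2, where the predictors are band
    combinations (here a ratio and a difference) derived from
    multispectral reflectance. bands: (n_samples, n_bands); tp: field TP."""
    x1 = bands[:, 2] / bands[:, 1]          # e.g. band3/band2 ratio
    x2 = bands[:, 3] - bands[:, 0]          # e.g. band4 - band1 difference
    X = np.column_stack([np.ones(len(tp)), x1, x2])
    coef, *_ = np.linalg.lstsq(X, tp, rcond=None)
    return coef

def predict_tp(coef, bands):
    """Apply the fitted band-combination model pixel by pixel."""
    x1 = bands[:, 2] / bands[:, 1]
    x2 = bands[:, 3] - bands[:, 0]
    X = np.column_stack([np.ones(len(bands)), x1, x2])
    return X @ coef

# Synthetic demonstration data (random reflectances, known coefficients).
rng = np.random.default_rng(0)
bands = rng.uniform(0.05, 0.5, size=(30, 4))
tp = 1.0 + 2.0 * bands[:, 2] / bands[:, 1] + 0.5 * (bands[:, 3] - bands[:, 0])
coef = fit_tp_model(bands, tp)
```

A regional scheme in the paper's sense would fit separate coefficient sets per lake region rather than one whole-lake model.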

  20. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    International Nuclear Information System (INIS)

    Sanders, J; Tian, X; Segars, P; Boone, J; Samei, E

    2016-01-01

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, the DLP of the exam, and the knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.

  1. Taenia solium porcine cysticercosis in Madagascar: Comparison of immuno-diagnostic techniques and estimation of the prevalence in pork carcasses traded in Antananarivo city.

    Science.gov (United States)

    Porphyre, V; Betson, M; Rabezanahary, H; Mboussou, Y; Zafindraibe, N J; Rasamoelina-Andriamanivo, H; Costard, S; Pfeiffer, D U; Michault, A

    2016-03-30

    Taenia solium cysticercosis was reported in official veterinary and medical statistics to be highly prevalent in pigs and humans in Madagascar, but few estimates are available for pigs. This study aimed to estimate the seroprevalence of porcine cysticercosis among pigs slaughtered in Antananarivo abattoirs. Firstly, the diagnostic performance of two antigen-ELISA techniques (B158B60 Ag-ELISA and HP10 Ag-ELISA) and an immunoblotting method were compared with meat inspection procedures on a sample of pigs suspected to be infected with (group 1; n=250) or free of (group 2; n=250) T. solium based on direct veterinary inspection in Madagascar. Sensitivity and specificity of the antigen ELISAs were then estimated using a Bayesian approach for detection of porcine cysticercosis in the absence of a gold standard. Then, a third set of pig sera (group 3, n=250) was randomly collected in Antananarivo slaughterhouses and tested to estimate the overall prevalence of T. solium contamination in pork meat traded in Antananarivo. The antigen ELISAs showed a high sensitivity (>84%), but the B158B60 Ag-ELISA appeared to be more specific than the HP10 Ag-ELISA (model 1: 95% vs 74%; model 2: 87% vs 71%). The overall prevalence of porcine cysticercosis in Antananarivo slaughterhouses was estimated at 2.3% (95% credibility interval [95%CrI]: 0.09-9.1%) to 2.6% (95%CrI: 0.1-10.3%) depending on the model and priors used. Since the sample used in this study is not representative of the national pig population, village-based surveys and longitudinal monitoring at slaughter are needed to better estimate the overall prevalence, geographical patterns and main risk factors for T. solium contamination, in order to improve control policies. Copyright © 2015 Elsevier B.V. All rights reserved.
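The role of sensitivity and specificity in moving from an apparent (test-based) prevalence to a true prevalence can be illustrated with the classical Rogan-Gladen correction. This is a simple frequentist stand-in for the Bayesian no-gold-standard model actually used in the study, shown with values close to (but not identical to) the paper's estimates:

```python
def rogan_gladen(apparent_prev, se, sp):
    """Rogan-Gladen estimator: true prevalence recovered from an apparent
    prevalence, given test sensitivity (se) and specificity (sp)."""
    return (apparent_prev + sp - 1.0) / (se + sp - 1.0)

# Round trip with illustrative values: se = 0.84, sp = 0.95, true p = 2.3%.
se, sp, p = 0.84, 0.95, 0.023
apparent = se * p + (1 - sp) * (1 - p)   # what the imperfect test would report
```

Note how at low prevalence even a 5% false-positive rate makes the apparent prevalence (about 6.8% here) roughly triple the true one, which is why the Se/Sp adjustment matters.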

  2. Environmental challenges and opportunities of the evolving North American electricity market : Modeling techniques and estimating environmental outcomes

    International Nuclear Information System (INIS)

    Patterson, Z.

    2002-06-01

    Background information on, and results from, the publicly available models used to evaluate the environmental effects of electricity market restructuring in the various jurisdictions of North America are presented in this working paper. It comprises descriptions of eleven models and twelve modeling exercises. The amount of information available on each model varied greatly, as much of it is proprietary. The models described were: (1) the Energy Information Administration's (EIA) National Energy Modeling System (NEMS), (2) the Department of Energy's Policy Office Electricity Modeling System (POEMS), (3) the Integrated Planning Model (IPM) utilized by the United States Environmental Protection Agency (US EPA), (4) Resources for the Future's (RFF) Haiku model, (5) the Canadian Energy Research Institute's Energy 2020 Model, (6) the Federal Energy Regulatory Commission's (FERC) use of ICF's Coal and Electric Utilities Model, (7) the Center for Clean Air Policy's use of General Electric's Market Assessment and Portfolio Strategies (GE MAPS) model, (8) the Center for Clean Air Policy's use of GE MAPS in combination with New Energy Associates' Proscreen II, (9) the Commission for Environmental Cooperation's use of the Front of Envelope Model, (10) Ontario Power Generation's use of the Utility Fuel Economics Model and National Power Model, and (11) New York State Department of Public Service's (NYDPS) Final Generic Environmental Impact Statement using New Energy Associates' PROMOD. Also included in this working paper is a comparison of the results of the models and modeling exercises on which the estimation of the environmental effects of electricity market restructuring in the United States was based. 18 refs., 5 tabs

  3. Application of a stratified random sampling technique to the estimation and minimization of respirable quartz exposure to underground miners

    International Nuclear Information System (INIS)

    Makepeace, C.E.; Horvath, F.J.; Stocker, H.

    1981-11-01

    The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of the collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and the collective exposures
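The plan's two ingredients, equal-probability sampling within each stratum and a weighted combination of stratum means, can be sketched with hypothetical strata names and workforce weights:

```python
import random

def stratified_sample(strata, n_per_stratum, seed=42):
    """Simple random sample, without replacement, from every stratum
    (in practice a location x occupation x sampling-time cell)."""
    rng = random.Random(seed)
    return {name: rng.sample(members, min(n_per_stratum, len(members)))
            for name, members in strata.items()}

def stratified_mean(measurements, weights):
    """Overall exposure estimate: per-stratum mean exposures weighted by
    each stratum's share of the workforce (weights sum to 1)."""
    return sum(weights[s] * sum(v) / len(v) for s, v in measurements.items())
```

The weighted stratum means also expose directly which cells dominate the collective exposure, which is the input to the cost-benefit step described above.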

  4. Probability estimation of rare extreme events in the case of small samples: Technique and examples of analysis of earthquake catalogs

    Science.gov (United States)

    Pisarenko, V. F.; Rodkin, M. V.; Rukavishnikova, T. A.

    2017-11-01

    The most general approach to studying the recurrence law in the area of the rare largest events is associated with the use of the limit theorems of the theory of extreme values. In this paper, we use the Generalized Pareto Distribution (GPD). The unknown GPD parameters are typically determined by the method of maximum likelihood (ML). However, the ML estimate is only optimal for fairly large samples (>200-300), whereas in many practically important cases there are only dozens of large events. It is shown that for a small number of events, the highest accuracy with the GPD is provided by the method of quantiles (MQ). To illustrate these methodological results, we compiled data sets characterizing the tails of the distributions for typical subduction zones, regions of intracontinental seismicity, and zones of mid-oceanic (MO) ridges. This approach paves the way for a new method of seismic risk assessment: instead of the unstable characteristic, the maximum possible magnitude M_max, it is recommended to use the quantiles of the distribution of random maxima over a future time interval. The results of calculating such quantiles are presented.
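A minimal method-of-quantiles fit for the GPD (heavy-tailed case, shape xi > 0) can be sketched as below; the choice of the two quantile levels is illustrative, not the authors':

```python
import numpy as np

def gpd_quantile(p, xi, sigma):
    """GPD quantile function for shape xi != 0 and scale sigma."""
    return sigma / xi * ((1 - p) ** (-xi) - 1)

def gpd_fit_quantiles(sample, p1=0.5, p2=0.9):
    """Method-of-quantiles estimate of (xi, sigma) for xi > 0: solve the
    quantile-ratio equation for xi by bisection (the ratio is monotone
    increasing in xi), then back out sigma from one quantile."""
    q1, q2 = np.quantile(sample, [p1, p2])
    target = q2 / q1
    lo, hi = 1e-6, 5.0
    for _ in range(100):
        xi = 0.5 * (lo + hi)
        ratio = ((1 - p2) ** (-xi) - 1) / ((1 - p1) ** (-xi) - 1)
        if ratio < target:
            lo = xi
        else:
            hi = xi
    sigma = xi * q1 / ((1 - p1) ** (-xi) - 1)
    return xi, sigma
```

Because only two order statistics enter, the estimate stays stable for samples of a few dozen exceedances, which is the regime the abstract describes.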

  5. Hydraulic Geometry, GIS and Remote Sensing, Techniques against Rainfall-Runoff Models for Estimating Flood Magnitude in Ephemeral Fluvial Systems

    Directory of Open Access Journals (Sweden)

    Rafael Garcia-Lorenzo

    2010-11-01

    Full Text Available This paper shows the combined use of remotely sensed data and hydraulic geometry methods as an alternative to rainfall-runoff models. Hydraulic geometric data and boolean images of water sheets obtained from satellite images after storm events were integrated in a Geographical Information System. Channel cross-sections were extracted from a high resolution Digital Terrain Model (DTM) and superimposed on the image cover to estimate the peak flow using HEC-RAS. The proposed methodology has been tested in ephemeral channels (ramblas) on the coastal zone in south-eastern Spain. These fluvial systems constitute an important natural hazard due to their high discharges and sediment loads. In particular, different areas affected by floods during the period 1997 to 2009 were delimited through HEC-GeoRAS from hydraulic geometry data and Landsat images of these floods (Landsat-TM5 and Landsat-ETM+7). Such an approach has been validated against rainfall-surface runoff models (SCS Dimensionless Unit Hydrograph, SCSD; Témez gamma HU, Tγ; and the Modified Rational method, MRM), comparing their results with flood hydrographs of the Automatic Hydrologic Information System (AHIS) in several ephemeral channels in the Murcia Region. The results obtained from the method providing a better fit were used to calculate different hydraulic geometry parameters, especially in residual flood areas.

  6. a New Technique Based on Mini-Uas for Estimating Water and Bottom Radiance Contributions in Optically Shallow Waters

    Science.gov (United States)

    Montes-Hugo, M. A.; Barrado, C.; Pastor, E.

    2015-08-01

    The mapping of nearshore bathymetry based on spaceborne radiometers is commonly used for the QC of ocean colour products in littoral waters. However, the accuracy of these estimates is relatively poor with respect to those derived from Lidar systems, due in part to the large uncertainties of bottom depth retrievals caused by changes in bottom reflectivity. Here, we present a method based on mini unmanned aerial vehicle (UAS) images for discriminating bottom-reflected and water radiance components by taking advantage of shadows created by different structures sitting on the bottom boundary. Aerial surveys were done with a Draganfly X4P drone on 1 October 2013 in optically shallow waters of the Saint Lawrence Estuary, during low tide. Colour images with a spatial resolution of 3 mm were obtained with an Olympus EPM-1 camera at 10 m height. Preliminary results showed an increase of the relative difference between bright and dark pixels (dP) toward the red wavelengths of the camera's receiver. This suggests that dP values can potentially be used as a quantitative proxy of bottom reflectivity after removing artefacts related to Fresnel reflection and bottom adjacency effects.

  7. A comparative study on the production rates of VFA and bacteria in the rumen of buffalo and goat estimated by isotope dilution technique

    International Nuclear Information System (INIS)

    Verma, D.N.; Mehra, U.R.; Singh, U.B.; Ranjhan, S.K.

    1977-01-01

    Digestibility trials were conducted on Murrah buffaloes and Barbari goats fitted with rumen cannulae to determine the digestibility of the feed constituents, and the production rates of bacteria and total VFA in the rumen were estimated by the isotope dilution technique. Bacterial cell growth in the rumen was greater in goats than in buffaloes when the animals were fed ad libitum and the results were calculated on an equal feed intake basis, whereas in buffaloes fed a restricted diet equal to that of the goats, the production of bacteria and VFA was higher. Goats converted 54.04 percent of their dietary nitrogen into microbial nitrogen, more than twice the figure for buffaloes. (author)
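The single-injection isotope dilution calculation can be sketched as follows, assuming a mono-exponential decline of tracer specific activity after injection (an idealization of the actual rumen kinetics):

```python
import math

def production_rate(dose, times, specific_activity):
    """Single-injection isotope dilution: fit ln(SA) = ln(SA0) - k*t by
    ordinary least squares, then pool size = dose / SA0 and the
    production (= irreversible outflow) rate = k * pool."""
    n = len(times)
    logs = [math.log(sa) for sa in specific_activity]
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    k = -sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs)) / \
        sum((t - t_mean) ** 2 for t in times)       # fractional turnover rate
    sa0 = math.exp(y_mean + k * t_mean)             # extrapolated SA at t = 0
    pool = dose / sa0                               # dilution principle
    return k * pool
```

With a tracer dose of 100 units diluted into a 10-unit pool turning over at k = 0.2 per hour, the recovered production rate is 2 units per hour.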

  8. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Lee, Seongjun; Kim, Jonghoon

    2015-01-01

    Sophisticated data of the experimental DCV (discharging/charging voltage) of a lithium-ion battery is required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when sensing noisy DCV signals, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noise-riding DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, A_n and D_n). Specifically, signal processing of the high-frequency component D_n, which focuses on a short time interval, is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detailed coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data of the experimental DCV is required for high-accuracy SOC. • A DWT (discrete wavelet transform)-based denoising technique is newly investigated. • The three steps for denoising a noisy DCV measurement are implemented. • Experimental results indicate the robustness of the proposed work for reliable SOC
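The three denoising steps (decompose, hard-threshold, reconstruct) can be illustrated with a single-level Haar transform; the paper's MRA uses a deeper decomposition and its own choice of wavelet, so this is only a minimal sketch:

```python
import numpy as np

def haar_denoise(x, thresh):
    """Single-level Haar DWT denoising of an even-length signal:
    decompose into approximation (A1) and detail (D1) coefficients,
    hard-threshold D1, then reconstruct via the inverse transform."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (approximation) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (detail) band
    d[np.abs(d) < thresh] = 0.0            # hard-thresholding rule
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse DWT (reconstruction)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

Applied to a slowly discharging voltage ramp carrying small alternating sensor noise, the thresholded reconstruction tracks the ramp while the noise, which lives entirely in D1, is removed.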

  9. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    Science.gov (United States)

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

    Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food, and practical analysis tools to characterize them are therefore needed. In view of the importance of molecular weight for the functional properties of agave fructans, this study aims to optimize a method for determining their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7 °C, tri-distilled water without salt adjusted to pH 5.4, and a flow rate of 0.36 mL/min. The exclusion range is from a degree of polymerization of 1 to 49 (180-7966 Da). The proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. Industrial applications of this technique include quality control, the study of fractionation processes and the determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
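The simplex (Nelder-Mead) search itself can be sketched generically. The objective used in the demonstration is a stand-in quadratic whose optimum sits at the reported temperature and pH; a real application would minimize a chromatographic response criterion measured experimentally at each vertex:

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    """Minimal Nelder-Mead simplex minimizer (reflection, expansion,
    inside contraction, shrink; standard coefficients 1, 2, 0.5, 0.5)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            # expansion: push further along the promising direction
            exp = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # inside contraction toward the centroid
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                # shrink the whole simplex toward the best vertex
                simplex = [best] + [
                    [0.5 * (best[j] + p[j]) for j in range(n)] for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Hypothetical stand-in objective with its optimum at (61.7 degC, pH 5.4).
opt = nelder_mead(lambda p: (p[0] - 61.7) ** 2 + (p[1] - 5.4) ** 2, [50.0, 4.0])
```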

  10. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    Science.gov (United States)

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
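In the special case of a single input and a single output under constant returns to scale, DEA efficiency reduces to each unit's productivity ratio relative to the best performer, which makes the idea easy to see (made-up hospital figures, not the Italian sample):

```python
def dea_crs_efficiency(inputs, outputs):
    """DEA-CRS technical efficiency for single-input/single-output units:
    each unit's output/input ratio divided by the best (frontier) ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    frontier = max(ratios)
    return [r / frontier for r in ratios]

# Hypothetical hospitals: beds (input) and treated cases (output).
scores = dea_crs_efficiency([100, 200, 150], [300, 400, 450])
```

The multi-input/multi-output DEA-CRS and DEA-VRS models compared in the study generalize this ratio via linear programming, and the parametric models replace the deterministic frontier with a stochastic one.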

  11. Utilization of noise analysis technique for mechanical vibrations estimation in the ATUCHA1 and Embalse Argentine NPP

    International Nuclear Information System (INIS)

    Lescano, V.H.; Wentzeis, L.M.; Guevara, M.; Moreno, C.; Pineyro, J.

    1996-01-01

    In Argentina, comprehensive noise measurements have been performed with the reactor instrumentation of the PHWR power plants Atucha I and Embalse. The Embalse reactor is a CANDU-600 (600 MWe) pressurized heavy water reactor: a heavy-water-moderated and heavy-water-cooled, natural-uranium-fuelled pressure tube system. Signals from vanadium- and platinum-type in-core self-powered neutron detectors, from ex-core ion chambers and from a moderator pressure sensor have been recorded and analysed. The vibrations of reactor internals, such as the vertical and horizontal in-core neutron flux detector units and the coolant channel systems consisting of calandria and pressure tubes with fuel bundles, have been identified and monitored during normal reactor operation. Atucha I is a natural-uranium-fuelled PHWR, moderated and cooled by heavy water. Neutron noise techniques using ex-core ionization chambers and in-core vanadium SPNDs were implemented, among others, for the early detection of anomalous vibrations of the reactor internals. Noise analysis was successfully performed to identify normal and anomalous vibrations of particular reactor internals. (author)

  12. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    Science.gov (United States)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of the background concentration of PM2.5 is important for understanding the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating the PM2.5 background concentration. Five separation methods, comprising recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a level comparable to the contribution obtained in a previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background levels of other air pollutants.
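The Lyne-Hollick one-parameter filter, borrowed from hydrograph baseflow separation, can be applied to a concentration time series as sketched below. The filter parameter shown is the common hydrological default, not the value calibrated at the APEC background site, and only a single forward pass is shown:

```python
import numpy as np

def lyne_hollick_baseline(y, alpha=0.925):
    """Separate a concentration time series into a slowly varying baseline
    and a short-term (local-emission) component with the Lyne-Hollick
    recursive filter:
        f_k = alpha * f_{k-1} + (1 + alpha) / 2 * (y_k - y_{k-1})
    where f is the filtered quickflow; the baseline is constrained so it
    never exceeds the observed value."""
    y = np.asarray(y, dtype=float)
    f = np.zeros_like(y)
    for k in range(1, len(y)):
        f[k] = alpha * f[k - 1] + (1 + alpha) / 2 * (y[k] - y[k - 1])
    quick = np.clip(f, 0.0, None)   # negative filtered values carry no quickflow
    return y - quick
```

On a flat 30 μg/m3 background with a short pollution episode superimposed, the filter assigns the episode to the quickflow component and returns to the 30 μg/m3 baseline once the episode decays.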

  13. A comparison of two colorimetric assays, based upon Lowry and Bradford techniques, to estimate total protein in soil extracts.

    Science.gov (United States)

    Redmile-Gordon, M A; Armenise, E; White, R P; Hirsch, P R; Goulding, K W T

    2013-12-01

    Soil extracts usually contain large quantities of dissolved humified organic material, typically reflected by a high polyphenolic content. Since polyphenols seriously confound the quantification of extracted protein, minimising this interference is important to ensure measurements are representative. Although the Bradford colorimetric assay is used routinely in soil science for the rapid quantification of protein in soil extracts, it has several limitations. We therefore investigated an alternative colorimetric technique based on the Lowry assay (frequently used to measure protein and humic substances as distinct pools in microbial biofilms). The accuracies of the Bradford assay and a modified Lowry microplate method were compared in factorial combination. Protein was quantified in soil extracts (extracted with citrate), including standard additions of model protein (BSA) and polyphenol (Sigma H1675-2). Using the Lowry microplate assay described, no interfering effects of citrate were detected, even at concentrations up to 5 times greater than those typically used to extract soil protein. Moreover, the Bradford assay was found to be highly susceptible to two simultaneous and confounding artefacts: 1) the colour development due to added protein was greatly inhibited by the polyphenol concentration, and 2) substantial colour development was caused directly by the polyphenol addition. In contrast, the Lowry method enabled a distinction between colour development of protein and non-protein origin, providing a more accurate quantitative analysis. These results suggest that the modified Lowry method is a more suitable measure of extract protein (defined by standard equivalents) because it is less confounded by the high polyphenolic content that is so typical of soil extracts.

  14. Estimation of fracture toughness of Zr 2.5% Nb pressure tube of Pressurised Heavy Water Reactor using cyclic ball indentation technique

    Energy Technology Data Exchange (ETDEWEB)

    Chatterjee, S., E-mail: subrata@barc.gov.in; Panwar, Sanjay; Madhusoodanan, K.; Rama Rao, A.

    2016-08-15

    Highlights: • Measurement of the fracture toughness of pressure tubes is required for fitness assessment. • Removing a pressure tube from the core for laboratory testing consumes a large radiation dose. • A remotely operable In situ Property Measurement System (IProMS) has been designed in-house. • Conventional and IProMS tests were conducted on pressure tube spool pieces with different mechanical properties. • A correlation has been established between the conventional and IProMS-estimated fracture properties. - Abstract: In Pressurised Heavy Water Reactors (PHWRs), fuel bundles are located inside horizontal pressure tubes made of a Zr-2.5 wt% Nb alloy. Pressure tubes degrade during their service life due to the high pressure, high temperature and radiation environment. Measurement of the mechanical properties of degraded pressure tubes is important for assessing their fitness for further operation. Presently, as per the safety guidelines imposed by the regulatory body, a few pre-selected pressure tubes are removed from the reactor core at regular intervals during planned reactor shutdowns for post irradiation examination (PIE) in a laboratory, which consumes considerable man-rem and imposes economic penalties. Hence a system is needed that can measure the mechanical properties of pressure tubes under in situ conditions. The only way to accomplish this objective is to develop a system based on an in situ measurement technique. In the field of in situ estimation of material properties, cyclic ball indentation is an emerging technique. Commercial systems are available for performing an indentation test either on the outside surface of a component at site or on a test piece in a laboratory; however, these systems cannot be used inside a pressure tube to carry out ball indentation trials under in situ conditions. Considering the importance of such measurements, an In situ Property

  15. Development and validation of a new technique for estimating a minimum postmortem interval using adult blow fly (Diptera: Calliphoridae) carcass attendance.

    Science.gov (United States)

    Mohr, Rachel M; Tomberlin, Jeffery K

    2015-07-01

    Understanding the onset and duration of adult blow fly activity is critical to accurately estimating the period of insect activity, or minimum postmortem interval (minPMI). Few, if any, reliable techniques have been developed, and consequently validated, for using adult fly activity to determine a minPMI. In this study, adult blow flies (Diptera: Calliphoridae) of Cochliomyia macellaria and Chrysomya rufifacies were collected from swine carcasses in rural central Texas, USA, during summer 2008, and Phormia regina and Calliphora vicina during the winters of 2009 and 2010. Carcass attendance patterns of blow flies were related to species, sex, and oocyte development. Summer-active flies were found to arrive 4-12 h after initial carcass exposure, with C. macellaria and C. rufifacies arriving within 2 h of one another. Winter-active flies arrived within 48 h of one another. There was a significant difference in the degree of oocyte development on each of the first 3 days postmortem. These frequency differences allowed a minPMI to be calculated using a binomial analysis. When validated in seven tests using domestic and feral swine and human remains, the technique correctly estimated time of placement in six trials.

  16. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of one algorithm is the input of the next algorithm in the sequence. The algorithms that constitute such systems exhibit vastly different computational characteristics and therefore require different data decomposition and load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge-based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on point correspondences between the involved images, which form a sequence of stereo image pairs. The researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which saves a considerable amount of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo matching of images at one time instant; (3) time matching of images from consecutive time instants; (4) stereo matching to compute final unambiguous points; and (5) computation of motion parameters.

  17. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows the construction of a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source, and is usually included in regional wind resource maps as useful prior information for identifying potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models for estimating the WPDs at a target site. Seven of these models use Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis for comparing the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were first trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
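The baseline of the eight models, the single-predictor MLR, can be sketched in a few lines: regress target-site wind speed on concurrent reference-site speed, then convert the long-term predicted speeds to WPD with the standard relation WPD = 0.5·ρ·v³. This is a hedged, pure-Python illustration with invented numbers, not the authors' implementation (which uses SVR, wrapper feature selection and air-density sub-models).

```python
def fit_linear(x, y):
    # Ordinary least-squares fit y ~ a*x + b (single-predictor MLR baseline).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def wind_power_density(v, rho=1.225):
    # WPD = 0.5 * rho * v^3 (W/m^2); rho in kg/m^3, v in m/s.
    return 0.5 * rho * v ** 3

# Concurrent short-term records at reference and target sites (synthetic numbers).
ref    = [4.0, 5.5, 7.0, 8.2, 6.1, 9.0]
target = [3.5, 4.9, 6.4, 7.6, 5.6, 8.3]
a, b = fit_linear(ref, target)

# Long-term reference speeds -> predicted target speeds -> mean WPD.
long_term_ref = [5.0, 6.0, 7.5, 4.2, 8.8]
pred = [a * v + b for v in long_term_ref]
mean_wpd = sum(wind_power_density(v) for v in pred) / len(pred)
```

Training on WPD directly rather than on wind speed would change the loss being minimised, which is exactly the distinction the highlights draw.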

  18. Preliminary assessment of the potential for using the cesium-137 technique to estimate rates of soil erosion on cultivated land in La Victoria I, Camagüey Province, Cuba

    International Nuclear Information System (INIS)

    Brigido, F.O.; Gandarilla Benitez, J.E.

    1999-01-01

    Despite a growing awareness that erosion on cultivated land in Cuba is a potential hazard to long-term productivity, there is still only limited information on the rates involved, particularly long-term values. The potential for using the radionuclide Caesium-137 as an environmental tracer to indicate sources of soil erosion on cultivated soils in the La Victoria catchment is introduced. Use of Caesium-137 measurements to estimate rates of erosion and deposition is founded on comparing the Caesium-137 inventories at individual sampling points with a reference inventory representing the local Caesium-137 fallout input, and thus the inventory to be expected at a site experiencing neither erosion nor deposition. Two models for converting Caesium-137 measurements to estimates of soil redistribution rates at the studied site have been used: the Proportional Model and the Mass Balance Model. Using the first, net soil erosion was calculated to be 17.6 t ha^-1 yr^-1. Estimates of soil loss using the Mass Balance Model (Simplified Model 1 and Model 2) were found to be 30.2 and 30.6 t ha^-1 yr^-1, respectively. Preliminary results suggest that the Caesium-137 technique may be of considerable value in assembling data on the rates and spatial distribution of soil loss, and a reliable tool for the development of soil conservation programs.
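A common form of the Proportional Model (after Walling and He) converts the percentage reduction in the 137Cs inventory directly into a mean annual soil loss. The sketch below is a hedged illustration with hypothetical inputs, not the exact parameterisation used in the study.

```python
def proportional_model(inv_ref, inv_point, bulk_density, plough_depth, years):
    """Proportional-model soil loss estimate in t/ha/yr.

    inv_ref, inv_point: 137Cs inventories at reference and sampled points (Bq/m^2)
    bulk_density: soil bulk density (kg/m^3)
    plough_depth: plough-layer depth (m)
    years: time elapsed since the onset of 137Cs fallout accumulation
    """
    # X: percentage reduction of the 137Cs inventory relative to the reference.
    X = 100.0 * (inv_ref - inv_point) / inv_ref
    # Y = 10 * B * d * (X/100) / T; the factor 10 converts kg/m^2 to t/ha.
    return 10.0 * bulk_density * plough_depth * (X / 100.0) / years

# Hypothetical point: 10% inventory loss over 40 years in a 0.2 m plough layer.
y = proportional_model(2500.0, 2250.0, 1300.0, 0.2, 40.0)   # -> 6.5 t/ha/yr
```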

  19. Phase Velocity Estimation of a Microstrip Line in a Stoichiometric Periodically Domain-Inverted LiTaO3 Modulator Using Electro-Optic Sampling Technique

    Directory of Open Access Journals (Sweden)

    Shintaro Hisatake

    2008-01-01

    We estimate the phase velocity of the modulation microwave in a quasi-velocity-matched (QVM) electro-optic (EO) phase modulator (QVM-EOM) using EO sampling, an accurate and highly reliable technique for measuring voltage waveforms at an electrode. The substrate of the measured QVM-EOM is a stoichiometric periodically domain-inverted LiTaO3 crystal. The electric field of a standing wave in a resonant microstrip line (width: 0.5 mm, height: 0.5 mm) is measured by employing a CdTe crystal as an EO sensor. The wavelength of the traveling microwave at 16.0801 GHz is determined as 3.33 mm by fitting the theoretical curve to the measured electric field distribution. The phase velocity is estimated as vm = 5.35×10^7 m/s, though there is a systematic error of about 5% due to perturbation by the EO sensor. A relative dielectric constant of εr = 41.5 is derived as the maximum likelihood value consistent with the estimated phase velocity.
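The reported phase velocity follows directly from the fitted wavelength via v = f·λ, which can be checked in two lines:

```python
f = 16.0801e9    # modulation frequency (Hz)
lam = 3.33e-3    # wavelength fitted to the measured standing-wave distribution (m)
v_m = f * lam    # phase velocity v = f * lambda -> about 5.35e7 m/s, as reported
```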

  20. An analytical comparison of different estimators for the density distribution of the catalyst in an experimental riser by a gammametric technique

    International Nuclear Information System (INIS)

    Lima, Emerson Alexandre de Oliveira; Dantas, Carlos C.; Melo, Silvio de Barros; Santos, Valdemir Alexandre dos

    2005-01-01

    In this paper we address the following questions: What form should the estimate of the ρ = ρ(x, y, z) function take? Which method describes the density distribution function most precisely? Which is the best estimator? Once the ρ = ρ(x, y, z) format and the approximation technique are defined, the experimental arrangement and pass-length configuration are the next problems to be solved: finding the best parameter estimation for the ρ = ρ(x, y, z) function according to the C pass lengths and their spatial configuration. The latter is required to define the resolution of the ρ = ρ(x, y, z) function and the mechanical scanning movements of the arrangement. These definitions will be implemented for an automated arrangement, by a computational program, for further development of the reconstruction of catalyst density distribution in experimental risers. The precision evaluation was finally compared to the arrangement geometry that yields the best pass-length spatial configuration. The results are shown in graphics for the two known density distributions. As a conclusion, the parameters for an automated arrangement design are given under the required precision for the catalyst distribution reconstruction. (author)

  1. Evaluation of Three Evaporation Estimation Techniques in a Semi-Arid Region (Omar El Mukhtar Reservoir, Libya, as a Case Study)

    Directory of Open Access Journals (Sweden)

    Lubna S. Ben Taher

    2017-02-01

    In many semi-arid countries of the world, such as Libya, the drinking water supply depends on reservoir storage. Since the evaporation rate is very high in semi-arid countries, estimates and forecasts of reservoir evaporation can be useful in managing a major water source. Many researchers have investigated the suitability of methods for estimating evaporation rates in many climatic settings, few of which were arid. This paper presents the results of modeling evaporation from the Omar El Mukhtar Reservoir, Libya. Three techniques, namely artificial neural networks (ANN), multiple linear regression (MLR) and response surface methods (RSM), were developed to estimate monthly evaporation records from 2001 to 2009; their relative performance was compared using the coefficient of determination (E), the mean absolute percentage error (MAPE%), and the 95% confidence interval. The key variables used to develop and validate the models were monthly precipitation (Rf), average temperature (Temp), relative humidity (Rh), sunshine hours (Sh), atmospheric pressure (Pa) and wind speed (Ws). The results showed that models with more inputs generally had better accuracy, and that the ANN model performed better than the other models in predicting monthly evaporation, with a high E = 0.86, a low MAPE% = 13.9, and a predicted mean within the observed 95% CI. In summary, this study shows that the ANN and RSM models are appropriate for predicting evaporation from climatic inputs in a semi-arid climate.
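The two comparison metrics used above are standard and easy to state precisely; a minimal sketch (the "E" here is the usual coefficient-of-determination form, assumed to match the paper's definition):

```python
def coefficient_of_determination(obs, pred):
    # E = 1 - SS_res / SS_tot; 1.0 means a perfect fit.
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def mape(obs, pred):
    # Mean absolute percentage error (%); undefined if any observation is 0.
    return 100.0 / len(obs) * sum(abs((o - p) / o) for o, p in zip(obs, pred))

# Toy monthly evaporation values (mm), perfect prediction -> E = 1, MAPE = 0.
obs = [100.0, 120.0, 150.0]
assert coefficient_of_determination(obs, obs) == 1.0
```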

  2. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
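The simplest instance of estimating a rate constant can be sketched without any optimisation machinery: for first-order kinetics C(t) = C0·exp(-k·t), taking logarithms linearises the model, and k is the negative of the ordinary least-squares slope. This is a hedged toy illustration, not the chapter's dynamic-model workflow.

```python
import math

def estimate_rate_constant(times, concentrations):
    # Linearise C(t) = C0*exp(-k*t):  ln C = ln C0 - k*t, then fit slope by OLS.
    y = [math.log(c) for c in concentrations]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    slope = sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / \
            sum((t - mt) ** 2 for t in times)
    return -slope  # k

t = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [2.0 * math.exp(-0.35 * ti) for ti in t]   # noise-free synthetic data, k = 0.35
k_hat = estimate_rate_constant(t, c)            # recovers k exactly for clean data
```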

  3. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. 
The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
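The ensemble Kalman filter idea used for tuning can be shown on a deliberately tiny problem: an ensemble of candidate parameter values is pushed toward noisy measurements through a gain computed from ensemble covariances. Everything below (the linear "forward model", the noise levels, the parameter) is an invented stand-in for the thesis's two-phase flow simulator.

```python
import random

def covariance(a, b):
    # Sample covariance of two equally long sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)

def forward_model(param, u):
    # Toy stand-in for the flow simulator: predicted measurement (e.g. a
    # pressure drop) as a function of the tuned parameter and operating point u.
    return param * u

random.seed(0)
TRUE_PARAM = 2.5      # "unknown" physical parameter the filter should recover
OBS_STD = 0.1         # measurement noise standard deviation

# Initial ensemble drawn around a deliberately poor first guess.
ensemble = [1.0 + random.gauss(0.0, 0.5) for _ in range(100)]

for step in range(20):
    u = 1.0 + 0.1 * step                                      # operating point
    d = forward_model(TRUE_PARAM, u) + random.gauss(0.0, OBS_STD)
    predicted = [forward_model(p, u) for p in ensemble]
    # Kalman gain from ensemble statistics.
    K = covariance(ensemble, predicted) / \
        (covariance(predicted, predicted) + OBS_STD ** 2)
    # Update every member against a perturbed observation.
    ensemble = [p + K * (d + random.gauss(0.0, OBS_STD) - yp)
                for p, yp in zip(ensemble, predicted)]

estimate = sum(ensemble) / len(ensemble)   # converges close to TRUE_PARAM
```

The same update applies unchanged when the state vector holds both physical variables and model parameters, which is how the thesis uses it.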

  4. The feasibility of a scanner-independent technique to estimate organ dose from MDCT scans: Using CTDIvol to account for differences between scanners

    International Nuclear Information System (INIS)

    Turner, Adam C.; Zankl, Maria; DeMarco, John J.; Cagnon, Chris H.; Zhang Di; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2010-01-01

    Purpose: Monte Carlo radiation transport techniques have made it possible to accurately estimate the radiation dose to radiosensitive organs in patient models from scans performed with modern multidetector row computed tomography (MDCT) scanners. However, there is considerable variation in organ doses across scanners, even when similar acquisition conditions are used. The purpose of this study was to investigate the feasibility of a technique to estimate organ doses that would be scanner independent. This was accomplished by assessing the ability of CTDIvol measurements to account for differences in MDCT scanners that lead to organ dose differences. Methods: Monte Carlo simulations of 64-slice MDCT scanners from each of the four major manufacturers were performed. An adult female patient model from the GSF family of voxelized phantoms was used in which all ICRP Publication 103 radiosensitive organs were identified. A 120 kVp, full-body helical scan with a pitch of 1 was simulated for each scanner using similar scan protocols across scanners. From each simulated scan, the radiation dose to each organ was obtained on a per mAs basis (mGy/mAs). In addition, CTDIvol values were obtained from each scanner for the selected scan parameters. Then, to demonstrate the feasibility of generating organ dose estimates from scanner-independent coefficients, the simulated organ dose values resulting from each scanner were normalized by the CTDIvol value for those acquisition conditions. Results: CTDIvol values across scanners showed considerable variation as the coefficient of variation (CoV) across scanners was 34.1%. The simulated patient scans also demonstrated considerable differences in organ dose values, which varied by up to a factor of approximately 2 between some of the scanners. The CoV across scanners for the simulated organ doses ranged from 26.7% (for the adrenals) to 37.7% (for the thyroid), with a mean CoV of 31.5% across all organs.
However, when organ doses
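The normalisation idea can be illustrated numerically: dividing each scanner's simulated organ dose by its CTDIvol yields coefficients with far less scanner-to-scanner spread, and multiplying the pooled coefficient by any one scanner's CTDIvol recovers an organ dose estimate. The numbers below are invented for illustration (by construction the normalised spread here is zero; in the paper it is merely much smaller).

```python
def cov_percent(values):
    # Coefficient of variation (%): sample standard deviation over the mean.
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * std / mean

# Hypothetical per-scanner values (mGy/mAs) for one organ and one protocol.
organ_dose = {"A": 0.120, "B": 0.090, "C": 0.150, "D": 0.105}
ctdivol    = {"A": 0.080, "B": 0.060, "C": 0.100, "D": 0.070}

# CTDIvol-normalised coefficients (dimensionless) vary much less across scanners.
coeffs = {s: organ_dose[s] / ctdivol[s] for s in organ_dose}
mean_coeff = sum(coeffs.values()) / len(coeffs)

# Scanner-independent estimate: pooled coefficient times a scanner's own CTDIvol.
dose_estimate_A = mean_coeff * ctdivol["A"]
```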

  5. Identification of Combined Power Quality Disturbances Using Singular Value Decomposition (SVD and Total Least Squares-Estimation of Signal Parameters via Rotational Invariance Techniques (TLS-ESPRIT

    Directory of Open Access Journals (Sweden)

    Huaishuo Xiao

    2017-11-01

    In order to identify various kinds of combined power quality disturbances, singular value decomposition (SVD) and the improved total least squares-estimation of signal parameters via rotational invariance techniques (TLS-ESPRIT) are combined as the basis of disturbance identification in this paper. SVD is applied to identify the catastrophe points of disturbance intervals, based on which the disturbance intervals are segmented. Then the improved TLS-ESPRIT, optimized by the singular value norm method, is used to analyze each data segment and extract the amplitude, frequency, attenuation coefficient and initial phase of the various kinds of disturbances. Multi-group combined disturbance test signals are constructed in MATLAB, and the proposed method is also tested on measured data from the IEEE Power and Energy Society (PES) database. The test results show that the proposed method has a relatively higher accuracy than conventional TLS-ESPRIT and could be used in the identification of measured data.
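The shift-invariance step at the heart of ESPRIT can be shown on a toy single-tone signal. The hedged, pure-Python sketch below uses ordinary least squares rather than the paper's TLS variant, omits its SVD-based segmentation and amplitude/damping extraction, and replaces a full SVD with a single power iteration, which is exact here because the noiseless data matrix has rank 1.

```python
import cmath

def esprit_single_tone(x, L):
    """Estimate the normalised frequency of one complex exponential via the
    ESPRIT shift-invariance principle (LS version, rank-1, noiseless case)."""
    K = len(x) - L + 1
    # Hankel data matrix H[i][k] = x[i + k], with L rows and K columns.
    H = [[x[i + k] for k in range(K)] for i in range(L)]
    # Principal left singular vector via one power iteration on H * H^H.
    w = [1.0 + 0.0j] + [0.0j] * (L - 1)
    Hh_w = [sum(H[i][k].conjugate() * w[i] for i in range(L)) for k in range(K)]
    u = [sum(H[i][k] * Hh_w[k] for k in range(K)) for i in range(L)]
    # Shift invariance: u[i + 1] = phi * u[i]; solve for phi by least squares.
    num = sum(u[i].conjugate() * u[i + 1] for i in range(L - 1))
    den = sum(abs(u[i]) ** 2 for i in range(L - 1))
    phi = num / den                      # rotation operator, |phi| = 1
    return cmath.phase(phi) / (2 * cmath.pi)

f_true = 0.1
x = [cmath.exp(2j * cmath.pi * f_true * n) for n in range(32)]
f_est = esprit_single_tone(x, L=16)      # recovers 0.1
```

With damping and noise the same structure applies, but the subspace must come from a full SVD and the eigenvalues of the shift operator also encode attenuation, which is what TLS-ESPRIT extracts.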

  6. Method for estimating potential wetland extent by utilizing streamflow statistics and flood-inundation mapping techniques: Pilot study for land along the Wabash River near Terre Haute, Indiana

    Science.gov (United States)

    Kim, Moon H.; Ritz, Christian T.; Arvin, Donald V.

    2012-01-01

    Potential wetland extents were estimated for a 14-mile reach of the Wabash River near Terre Haute, Indiana. This pilot study was completed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service (NRCS). The study showed that potential wetland extents can be estimated by analyzing streamflow statistics with the available streamgage data, calculating the approximate water-surface elevation along the river, and generating maps by use of flood-inundation mapping techniques. Planning successful restorations for Wetland Reserve Program (WRP) easements requires a determination of areas that show evidence of being in a zone prone to sustained or frequent flooding. Zone determinations of this type are used by WRP planners to define the actively inundated area and make decisions on restoration-practice installation. According to WRP planning guidelines, a site needs to show evidence of being in an "inundation zone" that is prone to sustained or frequent flooding for a period of 7 consecutive days at least once every 2 years on average in order to meet the planning criteria for determining a wetland for a restoration in agricultural land. By calculating the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval (7MQ2) at a streamgage on the basis of available streamflow data, one can determine the water-surface elevation corresponding to the calculated flow that defines the estimated inundation zone along the river. By using the estimated water-surface elevation ("inundation elevation") along the river, an approximate extent of potential wetland for a restoration in agricultural land can be mapped. 
As part of the pilot study, a set of maps representing the estimated potential wetland extents was generated in a geographic information system (GIS) application by combining (1) a digital water-surface plane representing the surface of inundation elevation that sloped in the downstream
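The 7MQ2 statistic described above can be sketched directly: take the highest 7-consecutive-day mean discharge in each year of record, then take the value with a 2-year recurrence interval, approximated below by the median of the annual series (a common simplification; the study would use the gage's full frequency analysis). The daily flows are synthetic.

```python
def rolling_7day_means(daily_q):
    # Mean discharge of every 7-consecutive-day window in one year of record.
    return [sum(daily_q[i:i + 7]) / 7.0 for i in range(len(daily_q) - 6)]

def seven_mq2(annual_daily_series):
    # Annual highest 7-day mean, then the 2-year recurrence value, taken here
    # as the median of the annual maxima.
    annual_max = [max(rolling_7day_means(year)) for year in annual_daily_series]
    s = sorted(annual_max)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Three synthetic "years" of constant daily discharge (cubic feet per second).
years = [[10.0] * 365, [20.0] * 365, [30.0] * 365]
q7_2 = seven_mq2(years)   # median of the annual 7-day maxima -> 20.0
```

The resulting discharge is then converted to a water-surface elevation at the gage to map the inundation zone.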

  7. A technique for estimating the probability of radiation-stimulated failures of integrated microcircuits in low-intensity radiation fields: Application to the Spektr-R spacecraft

    Science.gov (United States)

    Popov, V. D.; Khamidullina, N. M.

    2006-10-01

    In developing the radio-electronic devices (RED) of spacecraft operating in the ionizing radiation fields of space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation is the integrated microcircuits (IMC), especially those of large-scale (LSI) and very-large-scale (VLSI) integration. The main characteristic of an IMC that is taken into account when deciding whether to use a particular type of IMC in onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from reliability characteristics, disregarding radiation effects. This paper presents a “reliability” approach to determining the radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.

  8. Frequencies of X-ray induced chromosome aberrations in lymphocytes of xeroderma pigmentosum and Fanconi anemia patients estimated by Giemsa and fluorescence in situ hybridization staining techniques

    Directory of Open Access Journals (Sweden)

    Saraswathy Radha

    2000-01-01

    Blood lymphocytes from xeroderma pigmentosum (XP) and Fanconi anemia (FA) patients were assessed for their sensitivity to ionizing radiation by estimating the frequency of X-ray (1 and 2 Gy)-induced chromosome aberrations (CA). The frequencies of aberrations in the whole genome were estimated in Giemsa-stained preparations of lymphocytes irradiated at the G0 or G2 stages. The frequencies of translocations and dicentrics involving chromosomes 1 and 3 as well as the X chromosome were determined in slides stained by the fluorescence in situ hybridization (FISH) technique. An increase in all types of CA was observed in XP and FA lymphocytes irradiated at G0 when compared to controls. The frequency of dicentrics and rings was 6 to 27% higher (at 1 and 2 Gy) in XP lymphocytes and 37% higher (at 2 Gy) in FA lymphocytes than in controls, while chromosome deletions were 30% (1 Gy) and 72% (2 Gy) higher in irradiated than in control XP lymphocytes and 28 to 102% higher in FA lymphocytes. In G2-irradiated lymphocytes the frequency of CA was 24 to 55% higher in XP lymphocytes than in controls. In most cases the translocation frequencies were higher than the frequencies of dicentrics (21/19).

  9. The influence of the microbial quality of wastewater, lettuce cultivars and enumeration technique when estimating the microbial contamination of wastewater-irrigated lettuce.

    Science.gov (United States)

    Makkaew, P; Miller, M; Cromar, N J; Fallowfield, H J

    2017-04-01

    This study investigated the volume of wastewater retained on the surface of three different varieties of lettuce (Iceberg, Cos, and Oak leaf) following submersion in wastewater of different microbial qualities (10, 10^2, 10^3, and 10^4 E. coli MPN/100 mL) as a surrogate method for estimating the contamination of spray-irrigated lettuce. Uniquely, Escherichia coli was enumerated, after submersion, on both the outer and inner leaves and in a composite sample of lettuce. E. coli were enumerated using two techniques: firstly, directly from samples of leaves (the direct method); secondly, using an indirect method, in which the E. coli concentration was estimated from the volume of wastewater retained by the lettuce and the E. coli concentration of the wastewater. The results showed that different varieties of lettuce retained significantly different volumes of wastewater (p < 0.01); no significant differences were detected between E. coli counts obtained from different parts of the lettuce, nor between the direct and indirect enumeration methods. Statistically significant linear relationships were derived relating the E. coli concentration of the wastewater in which the lettuces were submerged to the subsequent E. coli count on each variety of lettuce.
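The indirect method reduces to one line of arithmetic: scale the wastewater concentration by the volume retained on the lettuce. A minimal sketch with hypothetical numbers, assuming the count is reported as MPN per 100 mL:

```python
def indirect_e_coli(retained_volume_ml, wastewater_mpn_per_100ml):
    # E. coli retained on the lettuce = retained volume x wastewater concentration.
    return retained_volume_ml * wastewater_mpn_per_100ml / 100.0

count = indirect_e_coli(2.5, 1.0e4)   # 2.5 mL retained at 10^4 MPN/100 mL
```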

  10. Estimation of the heat generation in vitrified waste product and shield thickness of the cask for the transportation of vitrified waste product using Monte Carlo technique

    International Nuclear Information System (INIS)

    Deepa, A.K.; Jakhete, A.P.; Mehta, D.; Kaushik, C.P.

    2011-01-01

    High Level Liquid Waste (HLW) generated during reprocessing of spent fuel contains most of the radioactivity present in the spent fuel, resulting in the need for isolation and surveillance for an extended period of time. The major components in HLW are corrosion products, fission products such as 137Cs, 90Sr, 106Ru, 144Ce and 125Sb, actinides and various chemicals used during reprocessing of spent fuel. Fresh HLW, with an activity concentration of around 100 Ci/L, is to be vitrified into borosilicate glass and packed in canisters which are placed in stainless steel overpacks for better confinement. These overpacks contain around 0.7 million curies of activity. Characterisation of the activity in HLW and the activity profile of radionuclides for various cooling periods set the base for the study. For transporting the vitrified waste product (VWP), the two most important parameters are the shield thickness of the transportation cask and the heat generation in the waste product. This paper describes the methodology used in the estimation of lead thickness for the transportation cask using the Monte Carlo technique. Heat generation due to the decay of fission products increases the temperature of the vitrified waste product during interim storage and disposal. Since glass does not have very high thermal conductivity, the temperature difference between the canister and its surroundings is significant in view of possible temperature-induced devitrification of the VWP. The heat generation in the canister and the overpack containing vitrified glass is also estimated using MCNP. (author)

  11. Estimation of microbial protein supply of lactating dairy cows under smallholder farms in north-east Thailand using urinary purine derivative technique

    International Nuclear Information System (INIS)

    Pimpa, O.; Liang, J.B.

    2004-01-01

    Two experiments were conducted to examine the potential of urinary purine derivatives (PD) as a predictive index of microbial protein supply in ruminant livestock under farm conditions. Results of Experiment 1 indicated that diurnal variation in the PDC index (the [mmol/L PD]/[mmol/L creatinine] ratio scaled by kgW^0.75) in spot urine samples of zebu cattle was small and highly correlated with the daily PD output, suggesting that spot urine samples could be used to derive an index for estimating microbial protein supply of cattle under farm conditions. However, the PDC index for buffaloes was poorly correlated with daily urinary PD output, and the use of spot urine samples therefore appeared unsuitable for buffaloes. Based on the above results, spot urine samples were used to estimate the microbial protein supply of lactating dairy cows under farm conditions in a follow-up experiment. The study was conducted using 24 lactating cows on 6 smallholder dairy farms in Khon Kaen province of Northeast Thailand, over two climatic seasons (rainy and dry), with the animals fed 5 kg of farm-mixed concentrate feed supplemented either with green grass (cut or grazed) as the roughage source during the rainy season or rice straw during the dry season. The results indicated that microbial protein supply, and therefore the nutritional status of the lactating cows, was not significantly different between the two seasons. The absence of differences in milk yield between seasons supports this finding. We conclude that the urinary PD technique could be used to estimate rumen microbial protein production for dairy cattle under farm conditions. (author)
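Reading the index as the spot-sample PD:creatinine ratio scaled by metabolic body weight (a common formulation; treat the exact form as an assumption here), it can be computed as:

```python
def pdc_index(pd_mmol_per_l, creatinine_mmol_per_l, body_weight_kg):
    # PDC index = ([PD] / [creatinine]) * W^0.75, concentrations in mmol/L,
    # W = body weight in kg (assumed formulation; see lead-in).
    return (pd_mmol_per_l / creatinine_mmol_per_l) * body_weight_kg ** 0.75

index = pdc_index(9.0, 3.0, 16.0)   # ratio 3.0 scaled by 16**0.75 = 8
```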

  12. Accuracy and Feasibility of Estimated Tumour Volumetry in Primary Gastric Gastrointestinal Stromal Tumours: Validation Using Semi-automated Technique in 127 Patients

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B.; O’Neill, Ailbhe C.; Nishino, Mizuki; Rosenthal, Michael H.; Ramaiya, Nikhil H.

    2015-01-01

    Objective To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semi-automated volumetry. Materials and Methods In this IRB-approved retrospective study, we measured the three longest diameters along the x, y, z axes on CT scans of primary gastric GISTs in 127 consecutive patients (52 women, 75 men; mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimated volumes (V1–V6) were obtained using formulae for spheres and ellipsoids. Intra- and inter-observer agreement of Vsegmented, and agreement of V1–V6 with Vsegmented, were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Results Median Vsegmented and V1–V6 were 75.9 cm3, 124.9 cm3, 111.6 cm3, 94.0 cm3, 94.4 cm3, 61.7 cm3 and 80.3 cm3, respectively. There was strong intra- and inter-observer agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x≠y≠z), with a CCC of 0.96 [95% CI: 0.95–0.97]. The mean relative difference was smallest for V6 (0.6%), while it was −19.1% for V5, +14.5% for V4, +17.9% for V3, +32.6% for V2 and +47% for V1. Conclusion Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semi-automated techniques are unavailable. PMID:25991487
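The scalene-ellipsoid estimate that agreed best with segmented volumes is the standard formula V = (π/6)·x·y·z; a minimal sketch, with a single-diameter sphere shown for contrast (the exact axis choices of the paper's V1–V5 variants are not reproduced here):

```python
import math

def ellipsoid_volume(x, y, z):
    # Scalene ellipsoid (the abstract's V6): V = (pi/6) * x * y * z,
    # with x, y, z the three longest orthogonal diameters.
    return math.pi / 6.0 * x * y * z

def sphere_volume(d):
    # Single-diameter sphere estimate: V = (pi/6) * d^3.
    return math.pi / 6.0 * d ** 3

v6 = ellipsoid_volume(6.2, 5.0, 4.1)   # volume in cm^3 for a 6.2 x 5.0 x 4.1 cm mass
```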

  13. In vitro digestion and DGT techniques for estimating cadmium and lead bioavailability in contaminated soils: Influence of gastric juice pH

    International Nuclear Information System (INIS)

    Pelfrene, Aurelie; Waterlot, Christophe; Douay, Francis

    2011-01-01

    A sensitivity analysis was conducted on an in vitro gastrointestinal digestion test (i) to investigate the influence of a low variation of gastric juice pH on the bioaccessibility of Cd and Pb in smelter-contaminated soils (F_B, using the unified bioaccessibility method UBM) and on the fractions of metals that may be transported across the intestinal epithelium (F_A, using the diffusive gradient in thin film technique), and (ii) to provide a better understanding of the significance of pH in health risk assessment through ingestion of soil by children. The risk of metal exposure to children (hazard quotient, HQ) was determined for conditions that represent a worst-case scenario (i.e., ingestion rate of 200 mg day^-1) using three separate calculations of metal daily intake: estimated daily intake (EDI), bioaccessible EDI (EDI-F_B), and oral bioavailable EDI (EDI-F_A). The increase in pH from 1.2 to 1.7 resulted in: (i) no significant variation in Cd-F_B in the gastric phase but a decrease in the gastrointestinal phase; (ii) a decrease in soluble Pb in the gastric phase and a significant variation in Pb-F_B in the gastrointestinal phase; (iii) a significant decrease in Cd-F_A and no variation in Pb-F_A; (iv) no change in EDI-F_B and EDI-F_A HQs for Cd; (v) a significant decrease in EDI-F_B HQs and no significant variation in EDI-F_A HQ for Pb. Under the analytical conditions used, these results show that the risk to children decreases when the bioavailability of Pb in soils is taken into account and that the studied pH values do not affect the EDI-F_A HQs. The present results provide evidence that the inclusion of bioavailability analysis during health risk assessment could provide a more realistic estimate of Cd and Pb exposure, and open a wide field of practical research on this topic (e.g., in contaminated site management). - Highlights: → Sensitivity analysis on an in vitro gastrointestinal digestion test. → Influence of gastric juice pH on metal bioaccessibility

  14. In vitro digestion and DGT techniques for estimating cadmium and lead bioavailability in contaminated soils: Influence of gastric juice pH

    Energy Technology Data Exchange (ETDEWEB)

    Pelfrene, Aurelie, E-mail: aurelie.pelfrene@isa-lille.fr [Universite Lille Nord de France, Lille (France); Groupe ISA, Equipe Sols et Environnement, Laboratoire Genie Civil et geo-Environnement (LGCgE) Lille Nord de France (EA 4515), 48 boulevard Vauban, 59046 Lille cedex (France); Waterlot, Christophe; Douay, Francis [Universite Lille Nord de France, Lille (France); Groupe ISA, Equipe Sols et Environnement, Laboratoire Genie Civil et geo-Environnement (LGCgE) Lille Nord de France (EA 4515), 48 boulevard Vauban, 59046 Lille cedex (France)

    2011-11-01

    A sensitivity analysis was conducted on an in vitro gastrointestinal digestion test (i) to investigate the influence of a low variation of gastric juice pH on the bioaccessibility of Cd and Pb in smelter-contaminated soils (F_B, using the unified bioaccessibility method UBM) and fractions of metals that may be transported across the intestinal epithelium (F_A, using the diffusive gradient in thin film technique), and (ii) to provide a better understanding of the significance of pH in health risk assessment through ingestion of soil by children. The risk of metal exposure to children (hazard quotient, HQ) was determined for conditions that represent a worst-case scenario (i.e., ingestion rate of 200 mg day⁻¹) using three separate calculations of metal daily intake: estimated daily intake (EDI), bioaccessible EDI (EDI-F_B), and oral bioavailable EDI (EDI-F_A). The increasing pH from 1.2 to 1.7 resulted in: (i) no significant variation in Cd-F_B in the gastric phase but a decrease in the gastrointestinal phase; (ii) a decrease in soluble Pb in the gastric phase and a significant variation in Pb-F_B in the gastrointestinal phase; (iii) a significant decrease in Cd-F_A and no variation in Pb-F_A; (iv) no change in EDI-F_B and EDI-F_A HQs for Cd; (v) a significant decrease in EDI-F_B HQs and no significant variation in EDI-F_A HQ for Pb. In the analytical conditions, these results show that risk to children decreases when the bioavailability of Pb in soils is taken into account and that the studied pH values do not affect the EDI-F_A HQs. The present results provide evidence that the inclusion of bioavailability analysis during health risk assessment could provide a more realistic estimate of Cd and Pb exposure, and opens a wide field of practical research on this topic (e.g., in contaminated site management). - Highlights: → Sensitivity analysis on an in vitro gastrointestinal digestion test
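The exposure arithmetic described in this record (EDI, its bioaccessibility- and bioavailability-adjusted variants, and the hazard quotient) can be sketched as below. All numeric values, including the soil concentration, the fractions F_B and F_A, and the reference dose, are illustrative placeholders rather than the study's measured data.

```python
# Worst-case soil-ingestion exposure sketch (illustrative values only).

def edi(conc_mg_kg, intake_kg_day=2e-4, body_weight_kg=15.0):
    """Estimated daily intake in mg per kg body weight per day,
    for a soil ingestion rate of 200 mg/day (2e-4 kg/day)."""
    return conc_mg_kg * intake_kg_day / body_weight_kg

def hazard_quotient(daily_intake, reference_dose):
    """HQ > 1 flags potential non-carcinogenic risk."""
    return daily_intake / reference_dose

cd_soil = 10.0        # mg Cd per kg soil (hypothetical)
f_b, f_a = 0.6, 0.2   # bioaccessible / bioavailable fractions (hypothetical)
rfd_cd = 1e-3         # oral reference dose, mg/kg/day (placeholder)

hq_total = hazard_quotient(edi(cd_soil), rfd_cd)               # EDI-based HQ
hq_bioavailable = hazard_quotient(edi(cd_soil) * f_a, rfd_cd)  # EDI-F_A HQ
```

With these placeholder numbers, adjusting the intake by F_A lowers the HQ fivefold, mirroring the paper's point that bioavailability-adjusted estimates are less conservative than total-concentration ones.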

  15. Estimation of net ecosystem metabolism of seagrass meadows in the coastal waters of the East Sea and Black Sea using the noninvasive eddy covariance technique

    Science.gov (United States)

    Lee, Jae Seong; Kang, Dong-Jin; Hineva, Elitsa; Slabakova, Violeta; Todorova, Valentina; Park, Jiyoung; Cho, Jin-Hyung

    2017-06-01

    We measured the community-scale metabolism of seagrass meadows in Bulgaria (Byala [BY]) and Korea (Hoopo Bay [HP]) to understand their ecosystem function in coastal waters. A noninvasive in situ eddy covariance technique was applied to estimate net O2 flux in the seagrass meadows. From the high-quality, high-resolution time series O2 data acquired over > 24 h, the O2 flux driven by turbulence was extracted at 15-min intervals. Spectral analysis of the vertical flow velocity and O2 concentration clearly showed well-developed turbulence characteristics in the inertial subrange. The hourly averaged net O2 fluxes ranged from -474 to 326 mmol O2 m⁻² d⁻¹ (-19 ± 41 mmol O2 m⁻² d⁻¹) at BY and from -74 to 482 mmol O2 m⁻² d⁻¹ (31 ± 17 mmol O2 m⁻² d⁻¹) at HP. Net O2 production responded rapidly to photosynthetically available radiation (PAR) and showed a good production-irradiance (P-I) relationship. The hysteresis in the daytime P-I relationship also suggested increasing heterotrophic respiration in the afternoon. At flow velocities between 3.30 and 6.70 cm s⁻¹, daytime and nighttime community metabolism increased significantly, by about 20 times and 5 times, respectively. The local hydrodynamic characteristics may be vital in determining the efficiency of community photosynthesis. The net ecosystem metabolism at BY was estimated to be -17 mmol O2 m⁻² d⁻¹, indicating heterotrophy, whereas that at HP was 36 mmol O2 m⁻² d⁻¹, suggesting an autotrophic state.
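The core of the eddy covariance calculation is the time-averaged product of the fluctuating parts of vertical velocity and O2 concentration over each averaging window (here 15 min). A minimal sketch, with a synthetic correlated series standing in for real sensor data:

```python
import numpy as np

def eddy_flux(w, c):
    """Turbulent flux as the covariance of vertical velocity (w) and
    O2 concentration (c) fluctuations (Reynolds decomposition)."""
    wp = w - w.mean()          # velocity fluctuation w'
    cp = c - c.mean()          # concentration fluctuation c'
    return (wp * cp).mean()    # positive = upward release, negative = uptake

# Synthetic 15-min burst at 1 Hz with O2 positively correlated with w.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.01, 900)                       # m/s
o2 = 250.0 + 50.0 * w + rng.normal(0.0, 0.05, 900)   # mmol/m^3
flux = eddy_flux(w, o2)   # > 0 here: net upward O2 transport
```

Real deployments additionally rotate the velocity coordinates, detrend the window, and convert to daily areal units (mmol O2 m⁻² d⁻¹); those steps are omitted here.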

  16. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectories, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting degrades CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which in turn degrades the reconstructed image quality in those regions. In this study, we developed a new 4D-CBCT reconstruction algorithm that introduces biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine structures. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons among SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  17. Genomic prediction using different estimation methodology, blending and cross-validation techniques for growth traits and visual scores in Hereford and Braford cattle.

    Science.gov (United States)

    Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F

    2018-05-07

    The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data contained 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotypes and phenotypes for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for traits at weaning and at yearling, ranging from 0.19 to 0.45 for k-means and 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method with both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to calculate the GEBV, and in general the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker effect estimation methods were lower for k-means clustering, indicating that the training set's relationship to the selection candidates is a major factor affecting the accuracy of genomic predictions. The gains in

  18. A comparative analysis of spectral exponent estimation techniques for 1/f(β) processes with applications to the analysis of stride interval time series.

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
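As a minimal illustration of the periodogram-regression family of estimators compared in this paper (not the averaged wavelet coefficient method that the authors recommend), β can be recovered as the negative slope of log power versus log frequency. The signal below is synthesized noise-free, so the fit is nearly exact, unlike real stride-interval series:

```python
import numpy as np

def synth_fractal(n, beta, seed=0):
    """Spectral synthesis of a 1/f^beta signal: deterministic power-law
    amplitudes with random phases, inverted back to the time domain."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)          # |X(f)|^2 ∝ f^-beta
    phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def estimate_beta(x):
    """Periodogram regression: slope of log PSD vs log f
    (DC and Nyquist bins excluded)."""
    f = np.fft.rfftfreq(x.size, d=1.0)[1:-1]
    psd = np.abs(np.fft.rfft(x))[1:-1] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(psd), 1)
    return -slope
```

On real, noisy series the raw periodogram slope has a large variance, which is exactly the kind of estimator weakness this comparison study quantifies.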

  19. Estimation of Th, Cs, Sr, I, Co, Fe, Zn, Ca and K in major food components using neutron activation analysis technique

    International Nuclear Information System (INIS)

    Nair, Suma; Bhati, Sharda

    2010-01-01

    The concentration of some radiologically and nutritionally important trace elements: Th, Cs, Sr, I, Co, Fe, Zn, Ca and K were determined in major food components such as cereals, pulses, vegetables, fruits, milk etc. The trace elements in food samples were determined using the neutron activation analysis technique, which involves instrumental and radiochemical neutron activation analysis. The trace elements Th, Cs, K and Sr are important in radiation protection; Fe and Zn are of importance in nutrition studies; and Ca and I have dual importance, in both nutrition and radiation protection. The results of analysis show that among the food materials, higher concentrations of Th, Cs, Sr, K, Fe, Zn and Co were found in cereals and pulses. In the case of Ca, milk appears to be the main contributor towards its dietary intake. The estimated concentrations of the trace elements in food components can be employed in determining the daily dietary intake of these elements, which in turn can be used for their biokinetic studies. (author)
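The final step described above, turning measured concentrations into daily dietary intake, is a weighted sum of concentration times consumption over food components. A sketch with hypothetical numbers (the concentrations and consumption figures below are placeholders, not the paper's results):

```python
# Element concentrations per food component (µg per g of food) and
# daily consumption of each component (g per day) - hypothetical values.
conc = {
    "cereals": {"Fe": 35.0, "Zn": 25.0},
    "milk":    {"Fe": 0.5,  "Zn": 3.5},
}
consumption = {"cereals": 300.0, "milk": 250.0}

def daily_intake(element):
    """Total daily dietary intake of one element, in µg per day."""
    return sum(conc[food].get(element, 0.0) * consumption[food]
               for food in conc)
```

For example, `daily_intake("Fe")` sums 35.0 µg/g × 300 g/day from cereals and 0.5 µg/g × 250 g/day from milk.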

  20. Examination of forensic entomology evidence using computed tomography scanning: case studies and refinement of techniques for estimating maggot mass volumes in bodies.

    Science.gov (United States)

    Johnson, Aidan; Archer, Melanie; Leigh-Shaw, Lyndie; Pais, Mike; O'Donnell, Chris; Wallman, James

    2012-09-01

    A new technique has recently been developed for estimating the volume of maggot masses on deceased persons using post-mortem CT scans. This allows volume to be measured non-invasively and factored into maggot mass temperature calculations for both casework and research. Examination of admission scans also allows exploration of entomological evidence in anatomical areas not usually exposed by autopsy (e.g. nasal cavities and facial sinuses), and before autopsy disrupts the maggot distribution on a body. This paper expands on work already completed by providing the x-ray attenuation coefficient by way of Hounsfield unit (HU) values for various maggot species, maggot masses and human tissue adjacent to masses. Specifically, this study looked at the HU values for four forensically important blowfly larvae: Lucilia cuprina, L. sericata, Calliphora stygia and C. vicina. The Calliphora species had significantly lower HU values than the Lucilia species. This might be explained by histological analysis, which revealed a non-significant trend, suggesting that Calliphora maggots have a higher fat content than the Lucilia maggots. It is apparent that the variation in the x-ray attenuation coefficient usually precludes its use as a tool for delineating the maggot mass from human tissue and that morphology is the dominant method for delineating a mass. This paper also includes three case studies, which reveal different applications for interpreting entomological evidence using post-mortem CT scans.

  1. Sedimentation rate and chronology of As and Zn in sediment of a recent former tin mining lake estimated using Pb-210 dating technique

    International Nuclear Information System (INIS)

    Zaharidah Abu Bakar; Ahmad Saat; Zaini Hamzah; Abdul Khalik Wood; Zaharudin Ahmad

    2011-01-01

    Sedimentation in lakes occurs through run-off from the land surface that settles on the lake bottom. Past mining activities might enhance the sedimentation process in former tin mining lakes, through either natural processes or human activities. Former tin mining lakes were suspected to have high sedimentation rates due to an undisturbed environment for almost 50 years. To estimate the sedimentation rate and metal contamination in this lake, the Pb-210 dating technique was used. Two sediment cores were sampled using a gravity corer from a former tin mining lake and then analyzed using alpha-spectrometry and Neutron Activation Analysis (NAA). The results showed that the sedimentation rates for sediment cores S1 and S2 are 0.26 cm y⁻¹ and 0.23 cm y⁻¹, respectively. According to the sediment chronological sequences, high concentrations of As and Zn in the upper layer indicated that human activities contributed to metal contamination in the lake sediment. The sedimentation rate and metal contamination are possibly due to recent anthropogenic activities around the lake, such as human settlement, farming and agriculture, since mining ceased a few decades ago. (author)
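Under the constant-initial-concentration (CIC) variant of Pb-210 dating, excess activity decays exponentially with depth, A(z) = A₀·exp(−λz/s), so the sedimentation rate s follows from the slope of ln A versus depth. A sketch of that regression (the abstract does not state which Pb-210 model the authors applied, so CIC is an assumption here; the activity profile below is synthetic):

```python
import numpy as np

PB210_LAMBDA = np.log(2) / 22.3   # Pb-210 decay constant, per year

def sedimentation_rate(depth_cm, excess_pb210):
    """CIC model: ln(activity) falls linearly with depth with
    slope -lambda/s, so s = -lambda / slope (cm per year)."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -PB210_LAMBDA / slope
```

With a measured core, `excess_pb210` would be the supported-background-corrected activities from alpha spectrometry at each depth interval.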

  2. To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data.

    Science.gov (United States)

    Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter

    2017-01-01

    Longitudinal data is almost always burdened with missing data. However, in educational and psychological research, there is a large discrepancy between methodological suggestions and research practice. The former suggests applying sensitivity analysis in order to assess the robustness of the results under varying assumptions regarding the mechanism generating the missing data. However, in research practice, participants with missing data are usually discarded by relying on listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version, in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines for reporting the results from sensitivity analysis are synthesised and applied to the results from the tutorial example.
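A toy numeric illustration of why the choice of missing-data technique matters: the same three-wave dataset yields different mean-change estimates under listwise deletion versus using all available first/last-wave pairs. (The tutorial's actual latent growth models would need an SEM package; the data below are invented.)

```python
import numpy as np

# Three measurement waves per student; np.nan marks a missed wave.
data = np.array([
    [3.0, 3.4, 3.8],
    [2.5, np.nan, 3.1],
    [3.2, 3.5, np.nan],
    [2.8, 3.0, 3.3],
])

def growth_listwise(d):
    """Mean wave-1 to wave-3 change, discarding any row with a gap."""
    complete = d[~np.isnan(d).any(axis=1)]
    return (complete[:, -1] - complete[:, 0]).mean()

def growth_available(d):
    """Mean change over every row where waves 1 and 3 were both observed."""
    ok = ~np.isnan(d[:, 0]) & ~np.isnan(d[:, -1])
    return (d[ok, -1] - d[ok, 0]).mean()
```

Here listwise deletion keeps only two of four students, and its growth estimate differs from the available-case one, which is the kind of divergence a sensitivity analysis is meant to surface.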

  3. Satellite precipitation estimation over the Tibetan Plateau

    Science.gov (United States)

    Porcu, F.; Gjoka, U.

    2012-04-01

    Precipitation characteristics over the Tibetan Plateau are little known, given the scarcity of reliable and widely distributed ground observations, so the satellite approach is a valuable choice for large-scale precipitation analysis and hydrological cycle studies. However, the satellite perspective suffers various shortcomings at the different wavelengths used in atmospheric remote sensing. In the microwave spectrum, the high soil emissivity often masks or hides the atmospheric signal upwelling from light-to-moderate precipitation layers, while low and relatively thin precipitating clouds are not well detected in the visible-infrared because of their low contrast with a cold and bright (if snow covered) background. In this work an IR-based, statistical rainfall estimation technique is trained and applied over the Tibetan Plateau hydrological basin to retrieve precipitation intensity at different spatial and temporal scales. The technique is based on a simple artificial neural network scheme trained with two supervised training sets assembled for the monsoon season and for the rest of the year. For the monsoon season (estimated from June to September), ground radar precipitation data for a few case studies are used to build the training set: four days in summer 2009 are considered. For the rest of the year, CloudSat-CPR derived snowfall rate has been used as the reference precipitation data, following the Kulie and Bennartz (2009) algorithm. METEOSAT-7 infrared channel radiances (at 6.7 and 11 micrometers) and derived local variability features (such as local standard deviation and local average) are used as input, and the actual rainrate is obtained as output for each satellite slot, every 30 minutes on the satellite grid. 
The satellite rainrate maps for three years (2008-2010) are computed and compared with available global precipitation products (such as C-MORPH and TMPA products) and with other techniques applied to the Plateau area: similarities and differences are
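The "local variability features" fed to the network, local average and local standard deviation of the IR radiance field, can be computed with a simple sliding window. A pure-NumPy sketch (the 3×3 window size and shrink-at-the-edges handling are our assumptions, not stated in the abstract):

```python
import numpy as np

def local_stats(field, k=3):
    """Local mean and standard deviation of a 2-D field over a k x k
    window; edge pixels use the part of the window that fits."""
    h, w = field.shape
    r = k // 2
    mean = np.empty((h, w))
    std = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = field[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            mean[i, j] = win.mean()
            std[i, j] = win.std()
    return mean, std
```

Stacking the two raw channel radiances with these two texture maps would give a four-feature input vector per pixel for the neural network.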

  4. Estimation of organ-absorbed radiation doses during 64-detector CT coronary angiography using different acquisition techniques and heart rates: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Matsubara, Kosuke; Koshida, Kichiro; Kawashima, Hiroko (Dept. of Quantum Medical Technology, Faculty of Health Sciences, Kanazawa Univ., Kanazawa (Japan)), email: matsuk@mhs.mp.kanazawa-u.ac.jp; Noto, Kimiya; Takata, Tadanori; Yamamoto, Tomoyuki (Dept. of Radiological Technology, Kanazawa Univ. Hospital, Kanazawa (Japan)); Shimono, Tetsunori (Dept. of Radiology, Hoshigaoka Koseinenkin Hospital, Hirakata (Japan)); Matsui, Osamu (Dept. of Radiology, Faculty of Medicine, Kanazawa Univ., Kanazawa (Japan))

    2011-07-15

    Background: Though appropriate image acquisition parameters allow an effective dose below 1 mSv for CT coronary angiography (CTCA) performed with the latest dual-source CT scanners, a single-source 64-detector CT procedure results in a significant radiation dose due to its technical limitations. Therefore, estimating the radiation doses absorbed by an organ during 64-detector CTCA is important. Purpose: To estimate the radiation doses absorbed by organs located in the chest region during 64-detector CTCA using different acquisition techniques and heart rates. Material and Methods: Absorbed doses for breast, heart, lung, red bone marrow, thymus, and skin were evaluated using an anthropomorphic phantom and radiophotoluminescence glass dosimeters (RPLDs). Electrocardiogram (ECG)-gated helical and ECG-triggered non-helical acquisitions were performed by applying a simulated heart rate of 60 beats per minute (bpm) and ECG-gated helical acquisitions using ECG modulation (ECGM) of the tube current were performed by applying simulated heart rates of 40, 60, and 90 bpm after placing RPLDs on the anatomic location of each organ. The absorbed dose for each organ was calculated by multiplying the calibrated mean dose values of RPLDs with the mass energy coefficient ratio. Results: For all acquisitions, the highest absorbed dose was observed for the heart. When the helical and non-helical acquisitions were performed by applying a simulated heart rate of 60 bpm, the absorbed doses for heart were 215.5, 202.2, and 66.8 mGy for helical, helical with ECGM, and non-helical acquisitions, respectively. When the helical acquisitions using ECGM were performed by applying simulated heart rates of 40, 60, and 90 bpm, the absorbed doses for heart were 178.6, 139.1, and 159.3 mGy, respectively. Conclusion: ECG-triggered non-helical acquisition is recommended to reduce the radiation dose. Also, controlling the patients' heart rate appropriately during ECG-gated helical acquisition with

  5. Estimating temporal and spatial variation of ocean surface pCO2 in the North Pacific using a self-organizing map neural network technique

    Directory of Open Access Journals (Sweden)

    S. Nakaoka

    2013-09-01

    Full Text Available This study uses a neural network technique to produce maps of the partial pressure of oceanic carbon dioxide (pCO2sea) in the North Pacific on a 0.25° latitude × 0.25° longitude grid from 2002 to 2008. The pCO2sea distribution was computed using a self-organizing map (SOM) originally utilized to map the pCO2sea in the North Atlantic. Four proxy parameters – sea surface temperature (SST), mixed layer depth, chlorophyll a concentration, and sea surface salinity (SSS) – are used during the training phase to enable the network to resolve the nonlinear relationships between the pCO2sea distribution and biogeochemistry of the basin. The observed pCO2sea data were obtained from an extensive dataset generated by the volunteer observation ship program operated by the National Institute for Environmental Studies (NIES). The reconstructed pCO2sea values agreed well with the pCO2sea measurements, with the root-mean-square error ranging from 17.6 μatm (for the NIES dataset used in the SOM) to 20.2 μatm (for an independent dataset). We confirmed that the pCO2sea estimates could be improved by including SSS as one of the training parameters and by taking into account secular increases of pCO2sea that have tracked increases in atmospheric CO2. Estimated pCO2sea values accurately reproduced pCO2sea data at several time series locations in the North Pacific. The distributions of pCO2sea revealed by 7 yr averaged monthly pCO2sea maps were similar to the Lamont-Doherty Earth Observatory pCO2sea climatology, allowing, however, for a more detailed analysis of biogeochemical conditions. The distributions of pCO2sea anomalies over the North Pacific during the winter clearly showed regional contrasts between El Niño and La Niña years related to changes of SST and vertical mixing.
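A minimal sketch of SOM training in the spirit described, with 4-dimensional proxy vectors (e.g. SST, mixed layer depth, chl-a, SSS) mapped onto a grid of prototype nodes. The grid size, learning schedule, and Gaussian neighborhood below are illustrative choices, not the study's configuration, and the labeling step (assigning each node the mean observed pCO2sea of its matching samples) is only indicated in a comment:

```python
import numpy as np

def train_som(samples, grid=(6, 6), epochs=20, seed=0):
    """Fit prototype vectors on a 2-D grid. Each sample pulls its
    best-matching unit (BMU) and the BMU's grid neighbours toward
    itself; learning rate and neighbourhood radius shrink over time.
    After training, each node would be labelled with the mean observed
    pCO2sea of the samples it wins, and new pixels inherit that label."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    nodes = rng.normal(size=(gy * gx, samples.shape[1]))
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = 1.0 + (gx / 2.0) * (1.0 - t / epochs)  # shrinking neighbourhood
        for s in samples:
            bmu = np.argmin(((nodes - s) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))[:, None]
            nodes += lr * h * (s - nodes)
    return nodes
```

In practice the proxy variables would first be normalized so that no single parameter dominates the BMU distance.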

  6. A comparative quantitative analysis of the IDEAL (iterative decomposition of water and fat with echo asymmetry and least-squares estimation) and the CHESS (chemical shift selection suppression) techniques in 3.0 T L-spine MRI

    Science.gov (United States)

    Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok

    2013-03-01

    This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selection suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique was also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection by using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogeneous fat removal in the target region. The results showed that the image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared to those obtained using the CHESS technique. The results of a qualitative analysis also showed that, compared to the CHESS technique, fewer susceptibility artifacts and more homogeneous fat removal were found in the images obtained using the IDEAL technique in a comparative evaluation of the axial plane images before and after contrast injection. In summary, compared to the CHESS technique, the IDEAL technique showed a lower occurrence of susceptibility artifacts caused by metal and lower image distortion. In addition, more homogeneous fat removal was shown with the IDEAL technique.

  7. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509

  8. Estimation of Tissue Distribution of mRNA Transcripts for Desaturase and Elongase Enzymes in Channa striata (Bloch, 1793) Fingerlings using PCR Technique

    Directory of Open Access Journals (Sweden)

    M. Aliyu-Paiko

    2013-06-01

    Full Text Available Fish species vary in their capacity to biosynthesize n-3 highly unsaturated fatty acids (HUFA), such as eicosapentaenoic and docosahexaenoic acids (EPA & DHA), that are crucial to the health and well-being of all higher vertebrates. Experts report that HUFA metabolism involves enzyme-mediated fatty acyl desaturation (FAD) and elongation (FAE) processes. In previous studies, different workers cloned, characterized, identified and reported several genes for FAD and FAE enzymes in different fish species such as Atlantic salmon, gilthead seabream, rainbow trout and zebrafish, and also demonstrated the up- and down-regulation in the activity of these enzymes in response to fluctuations in dietary HUFA. In this paper, we report on the expression of genes (mRNA transcripts) for FAD and FAE enzymes in different tissues of Channa striata (Bloch, 1793) fingerlings, to evaluate the tissues of the fish in which the activity of both enzymes is high. To achieve this objective, we used a conventional polymerase chain reaction (PCR) technique to isolate and quantify the absolute copy number of each gene transcript from 8 different tissues of the fish (reared with a commercial feed). Our estimates show that the distribution of the 2 enzyme transcripts was significantly (P < 0.05) higher in the liver and brain of C. striata than detected in the 6 other tissues evaluated (muscle, ovary, testis, intestine, kidney and skin). Subsequently, we discuss extensively the implications of this observation with respect to the use of vegetable oils (VO) as substitutes for fish oil (FO) in diets for freshwater fish species.

  9. Identification of variables for site calibration and power curve assessment in complex terrain. Task 8, a literature survey on theory and practice of parameter identification, specification and estimation (ISE) techniques

    Energy Technology Data Exchange (ETDEWEB)

    Verhoef, J.P.; Leendertse, G.P. [ECN Wind, Petten (Netherlands)

    2001-04-01

    This document presents the results of the literature survey on Identification, Specification and Estimation (ISE) techniques for variables within the SiteParIden project. Besides an overview of the different general techniques, an overview is also given of EU-funded wind energy projects where some of these techniques have been applied more specifically. The main problem in applications like power performance assessment and site calibration is to establish an appropriate model for predicting the considered dependent variable with the aid of measured independent (explanatory) variables. In these applications, detailed knowledge of what the relevant variables are and how precisely they appear in the model is typically missing. Therefore, the identification (of variables) and the specification (of the model relation) are important steps in the model-building phase. For the determination of the parameters in the model, a reliable parameter estimation technique is required. In EU-funded wind energy projects the linear regression technique is the most commonly applied tool for the estimation step. The linear regression technique may fail to find reliable parameter estimates when the model variables are strongly correlated, either due to the experimental set-up or because of their particular appearance in the model. This situation of multicollinearity sometimes results in unrealistic parameter values, e.g. with the wrong algebraic sign. It is concluded that different approaches, like multi-binning, can provide a better way of identifying the relevant variables. However, further research in these applications is needed, and it is recommended that alternative methods (neural networks, singular value decomposition, etc.) also be tested for their usefulness in a succeeding project. Increased interest in complex terrain, as a feasible location for wind farms, has also emphasised the need for adequate models. A common standard procedure to prescribe the statistical
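The multicollinearity failure mode described above, strongly correlated explanatory variables yielding unstable regression coefficients, is commonly diagnosed with variance inflation factors. A sketch of that diagnostic (VIF is one standard check, not necessarily the one used in the surveyed projects):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: 1/(1 - R^2) when
    that column is regressed (with intercept) on the remaining columns.
    Values well above ~10 flag severe collinearity."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

A collinear pair of predictors (e.g. two nearly identical wind-speed signals) inflates the VIF of both, which is exactly the situation where coefficient signs can flip unrealistically.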

  10. Estimation of the True Digestibility of Rumen Undegraded Dietary Protein in the Small Intestine of Ruminants by the Mobile Bag Technique

    DEFF Research Database (Denmark)

    Hvelplund, Torben; Weisbjerg, Martin Riis; Andersen, L. S.

    1992-01-01

    Dietary protein degraded to various extents by varying the time of rumen incubation was prepared from eight concentrates and four roughages. Intestinal digestibility was obtained using the mobile bag technique on intact protein and on the samples of undegraded dietary protein from each feed. The ...

  11. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    Science.gov (United States)

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on the skill of the individual observer, the size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town/Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
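
The simultaneous double-count component of such a survey can be illustrated with the standard two-observer (Chapman-modified Lincoln-Petersen) estimator. The counts below are hypothetical; the authors' full model additionally corrects sightability using covariates such as group size and vegetation cover, which this sketch does not attempt.

```python
def double_count_estimate(s1, s2, b):
    """Chapman-modified Lincoln-Petersen estimate of groups present.

    s1: groups seen only by observer 1
    s2: groups seen only by observer 2
    b:  groups seen by both observers (the "marked recaptures")
    """
    n1, n2 = s1 + b, s2 + b                     # totals per observer
    n_hat = (n1 + 1) * (n2 + 1) / (b + 1) - 1   # estimated groups present
    p1 = b / n2   # detection probability of observer 1
    p2 = b / n1   # detection probability of observer 2
    return n_hat, p1, p2

# Hypothetical flight: 12 groups seen only by obs. 1, 8 only by obs. 2,
# 40 by both -> the estimate exceeds the 60 groups actually seen.
n_hat, p1, p2 = double_count_estimate(s1=12, s2=8, b=40)
```

The gap between `n_hat` and the raw count (60 groups) is precisely why the abstract concludes that uncorrected counts are undercounts.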

  12. Sampling and estimation techniques for the implementation of new classification systems: the change-over from NACE Rev. 1.1 to NACE Rev. 2 in business surveys

    Directory of Open Access Journals (Sweden)

    Jan van den Brakel

    2010-09-01

    Full Text Available This paper describes some of the methodological problems encountered with the change-over from the NACE Rev. 1.1 to the NACE Rev. 2 in business statistics. Different sampling and estimation strategies are proposed to produce reliable figures for the domains under both classifications simultaneously. Furthermore several methods are described that can be used to reconstruct time series for the domains under the NACE Rev. 2.

  13. A combined ANN-GA and experimental based technique for the estimation of the unknown heat flux for a conjugate heat transfer problem

    Science.gov (United States)

    M K, Harsha Kumar; P S, Vishweshwara; N, Gnanasekaran; C, Balaji

    2018-05-01

    The major objectives in the design of thermal systems are to obtain information about thermophysical, transport and boundary properties. The main purpose of this paper is to estimate the unknown heat flux at the surface of a solid body. A constant-area mild steel fin is considered and its base is subjected to a constant heat flux. During heating, natural convection heat transfer occurs from the fin to the ambient. The direct solution, which is the forward problem, is developed as a conjugate heat transfer problem from the fin, and the steady state temperature distribution is recorded for any assumed heat flux. In order to model the natural convection heat transfer from the fin, an extended domain is created near the fin geometry, air is specified as the fluid medium, and the Navier-Stokes equations are solved incorporating the Boussinesq approximation. The computational time involved in executing the forward model is then reduced by developing a neural network (NN) between heat flux values and temperatures based on the back propagation algorithm. The conjugate heat transfer NN model is then coupled with a genetic algorithm (GA) for the solution of the inverse problem. Initially, GA is applied to the pure surrogate data; the results are then used as input to the Levenberg-Marquardt method, and such hybridization is proven to result in accurate estimation of the unknown heat flux. The hybrid method is then applied to the experimental temperatures to estimate the unknown heat flux. A satisfactory agreement between the estimated and actual heat flux is achieved with the hybrid method.
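
The surrogate-plus-GA inverse step can be sketched as follows. The "surrogate" here is a stand-in linear map from heat flux to sensor temperatures (the paper trains a neural network on CFD solutions), and the GA is a minimal real-coded version, so every number below is illustrative rather than from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(q):
    """Stand-in for the trained NN: maps a heat flux q (W/m^2) to
    steady-state temperatures (K) at 4 assumed sensor locations."""
    return 300.0 + q * np.array([8e-3, 6e-3, 4e-3, 2e-3])

q_true = 1500.0
measured = surrogate(q_true) + rng.normal(0.0, 0.1, 4)  # noisy "experiment"

def fitness(q):
    # GA maximizes fitness = negative misfit to the measured temperatures
    return -np.sum((surrogate(q) - measured) ** 2)

# Minimal real-coded GA: truncation selection + blend crossover + mutation.
pop = rng.uniform(0.0, 5000.0, 40)          # initial flux guesses
for gen in range(60):
    fit = np.array([fitness(q) for q in pop])
    parents = pop[np.argsort(fit)][-20:]    # keep the best half (elitism)
    mates = rng.choice(parents, size=20)
    children = 0.5 * (parents + mates) + rng.normal(0.0, 50.0, 20)
    pop = np.concatenate([parents, children])

q_est = max(pop, key=fitness)   # best flux found; close to q_true
```

In the paper's hybrid scheme this GA result would then seed a Levenberg-Marquardt refinement; here the GA alone already lands near the true flux.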

  14. SU-F-T-314: Estimation of Dose Distributions with Different Types of Breast Implants in Various Radiation Treatment Techniques for Breast Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Lee, S; Suh, T [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Jung, J [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Department of Radiation Oncology, College of Medicine, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of); Kim, S; Cho, Y; Lee, I [Department of Radiation Oncology, Gangnam Severance Hospital, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: This study investigates the effects of different kinds and designs of commercialized breast implants on the dose distributions in breast cancer radiotherapy under a variety of conditions. Methods: The dose for clinical conventional tangential irradiation, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) breast plans was measured using radiochromic films and optically stimulated luminescence dosimeters (OSLDs). The radiochromic film was used as an integrating dosimeter, while the OSLDs were used for real-time dosimetry to isolate the dose contribution of individual segments. The films were placed at various slices in the Rando phantom and between the body and breast surface; OSLDs were used to measure skin dose at 18 positions spaced over the two (right/left) breasts. The implant breast was placed on the left side and the phantom breast remained on the right side. Each treatment technique was performed for different breast sizes and different breast implant shapes. The PTV dose was prescribed as 50.4 Gy with V47.88≥95%. Results: For the different breast implant shapes, because of the extensive shadow formed around the breast implant, dose variation was relatively high compared with the prescribed dose. With the PTV delineated on the whole breast, a maximum dose error of 5% and an average difference of 3% were observed. VMAT techniques largely decreased the contiguous hot spot in the skin, by an average of 25% compared with IMRT. Both the IMRT and VMAT techniques resulted in lower doses to normal critical structures than tangential plans for nearly all dose analyses. Conclusion: Compared to the other techniques, IMRT reduced radiation dose exposure to normal tissues and maintained reasonable target homogeneity; for the same target coverage, VMAT can reduce the skin dose in all regions of the body.

  15. SU-F-T-314: Estimation of Dose Distributions with Different Types of Breast Implants in Various Radiation Treatment Techniques for Breast Cancer

    International Nuclear Information System (INIS)

    Lee, M; Lee, S; Suh, T; Jung, J; Kim, S; Cho, Y; Lee, I

    2016-01-01

    Purpose: This study investigates the effects of different kinds and designs of commercialized breast implants on the dose distributions in breast cancer radiotherapy under a variety of conditions. Methods: The dose for clinical conventional tangential irradiation, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) breast plans was measured using radiochromic films and optically stimulated luminescence dosimeters (OSLDs). The radiochromic film was used as an integrating dosimeter, while the OSLDs were used for real-time dosimetry to isolate the dose contribution of individual segments. The films were placed at various slices in the Rando phantom and between the body and breast surface; OSLDs were used to measure skin dose at 18 positions spaced over the two (right/left) breasts. The implant breast was placed on the left side and the phantom breast remained on the right side. Each treatment technique was performed for different breast sizes and different breast implant shapes. The PTV dose was prescribed as 50.4 Gy with V47.88≥95%. Results: For the different breast implant shapes, because of the extensive shadow formed around the breast implant, dose variation was relatively high compared with the prescribed dose. With the PTV delineated on the whole breast, a maximum dose error of 5% and an average difference of 3% were observed. VMAT techniques largely decreased the contiguous hot spot in the skin, by an average of 25% compared with IMRT. Both the IMRT and VMAT techniques resulted in lower doses to normal critical structures than tangential plans for nearly all dose analyses. Conclusion: Compared to the other techniques, IMRT reduced radiation dose exposure to normal tissues and maintained reasonable target homogeneity; for the same target coverage, VMAT can reduce the skin dose in all regions of the body.

  16. A novel use of the caesium-137 technique to estimate human interference and historical water level in a Mediterranean Temporary Pond

    OpenAIRE

    Foteinis, S.; Mpizoura, K.; Panagopoulos, G.; Chatzisymeon, E.; Kallithrakas-Kontos, N.; Manutsoglu, E.

    2014-01-01

    The sustainability of, and the effects of human pressures on, the Omalos Mediterranean Temporary Pond (MTP), Chanea, Greece, were assessed. The 137Cs technique was used to identify alleged anthropogenic interference (excavation) in the studied area. It was found that about one third of the pond's bed surface material had been removed and disposed of on the northeast edge, confirming unplanned excavations that took place in the MTP area some years ago. Nonetheless, five years after the excavation, the M...

  17. Simultaneous and individual quantitative estimation of Salmonella, Shigella and Listeria monocytogenes on inoculated Roma tomatoes (Lycopersicon esculentum var. Pyriforme) and Serrano peppers (Capsicum annuum) using an MPN technique.

    Science.gov (United States)

    Cabrera-Díaz, E; Martínez-Chávez, L; Sánchez-Camarena, J; Muñiz-Flores, J A; Castillo, A; Gutiérrez-González, P; Arvizu-Medrano, S M; González-Aguilar, D G; Martínez-Gonzáles, N E

    2018-08-01

    Simultaneous and individual enumeration of Salmonella, Shigella and Listeria monocytogenes was compared on inoculated Roma tomatoes and Serrano peppers using a Most Probable Number (MPN) technique. Samples consisting of tomatoes (4 units) or peppers (8 units) were individually inoculated with a cocktail of three strains of Salmonella, Shigella or L. monocytogenes, or by simultaneous inoculation of three strains of each pathogen, at low (1.2-1.7 log CFU/sample) and high (2.2-2.7 log CFU/sample) inocula. Samples were analyzed by an MPN technique using universal pre-enrichment (UP) broth at 35 °C for 24 ± 2 h. The UP tubes from each MPN series were transferred to enrichment and plating media following conventional methods for isolating each pathogen. Data were analyzed using multifactorial analysis of variance (p < 0.05); counts differed by inoculation type (individual > simultaneous), type of bacteria (Salmonella > Shigella and L. monocytogenes), type of sample (UP broth > pepper and tomato), and inoculum level (high > low). The MPN technique was effective for Salmonella on both commodities. Shigella counts were higher on tomatoes compared to peppers (p < 0.05), and L. monocytogenes counts were higher on peppers (p < 0.05). Copyright © 2018 Elsevier Ltd. All rights reserved.
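
The MPN value underlying the technique can be computed by maximum likelihood rather than table lookup. The sketch below assumes a standard serial-dilution design; the tube pattern shown is a textbook 3-tube example, not data from the paper.

```python
import numpy as np

def mpn(volumes_g, n_tubes, n_positive):
    """Maximum-likelihood MPN per gram from a dilution series.

    volumes_g:  sample mass (g) inoculated per tube at each dilution
    n_tubes:    number of tubes per dilution
    n_positive: positive tubes per dilution
    """
    v = np.asarray(volumes_g, float)
    n = np.asarray(n_tubes, float)
    p = np.asarray(n_positive, float)

    def log_lik(c):
        # P(tube positive) = 1 - exp(-c*v) under the Poisson model
        prob_pos = 1.0 - np.exp(-c * v)
        return np.sum(p * np.log(prob_pos) + (n - p) * (-c * v))

    grid = np.logspace(-2, 4, 20000)   # candidate concentrations (per g)
    return grid[np.argmax([log_lik(c) for c in grid])]

# Classic 3-tube series, 0.1/0.01/0.001 g per tube, positive pattern 3-1-0;
# the tabulated MPN for this pattern is about 43 per gram.
est = mpn([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
```

A grid search is used for transparency; a root-finder on the likelihood equation would be faster but less obvious.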

  18. Local heat transfer estimation in microchannels during convective boiling under microgravity conditions: 3D inverse heat conduction problem using BEM techniques

    Science.gov (United States)

    Luciani, S.; LeNiliot, C.

    2008-11-01

    Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. The analysis uses an inverse method which allows us to estimate the local heat transfers while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient, because thermocouples implanted on the surface of the minichannels would disturb the established flow. In this communication, we solve a 3D IHCP which consists of estimating, from experimental measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (µg, 1g, 1.8g). The considered IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).

  19. Estimation of Sea Level variations with GPS/GLONASS-Reflectometry Technique: Case Study at Stationary Oceanographic Platform in the Black Sea

    Science.gov (United States)

    Kurbatov, G. A.; Padokhin, A. M.

    2017-12-01

    In the present work we study GNSS-reflectometry methods for estimating sea level variations using a single GNSS receiver, based on multipath propagation effects (the interference pattern in the SNR of GNSS signals at small elevation angles) caused by the reflection of navigation signals from the sea surface. The measurements were carried out in the coastal zone of the Black Sea at the Stationary Oceanographic Platform during a one-week campaign in the summer of 2017. GPS/GLONASS signals at the two working frequencies of both systems were used to study sea level variations, which almost doubled the number of observations compared to a GPS-only tide gauge. Moreover, all the measurements were conducted with a 4-antenna GNSS receiver, providing the opportunity for different antenna orientations, including zenith- and nadir-looking ones as well as two horizontally oriented ones at different azimuths. As a reference we used data from a co-located wire wave gauge, which showed good correspondence between the two datasets. Though tidal effects are not very pronounced in the Black Sea, the described experimental setup allowed us to study the effects of sea surface roughness driven by meteorological conditions (e.g. wind waves), as well as antenna directivity pattern effects, on the observed interference patterns of GPS/GLONASS L1/L2 signals (the ratio of the main spectral peak to the noise power) and the quality of the sea level estimates.
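
The SNR-interference tide-gauge principle can be sketched numerically: the detrended SNR oscillates in sin(elevation) with frequency 2h/λ, so the dominant spectral peak yields the antenna height h above the reflecting surface. The signal below is synthetic and noise-free, with an assumed height; real processing typically uses a Lomb-Scargle periodogram on unevenly sampled arcs.

```python
import numpy as np

lam = 0.19        # GPS L1 carrier wavelength, ~0.19 m
h_true = 12.0     # assumed antenna height above the sea surface (m)

# Detrended SNR sampled on a uniform grid in x = sin(elevation):
x = np.linspace(np.sin(np.radians(5)), np.sin(np.radians(30)), 2048)
snr = np.cos(2 * np.pi * (2 * h_true / lam) * x)

# Dominant frequency (cycles per unit sin(e)) via a windowed FFT:
spec = np.abs(np.fft.rfft(snr * np.hanning(len(snr))))
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])
f_peak = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

h_est = lam * f_peak / 2.0   # recovered reflector height (m)
```

Sea level variations then appear as changes in `h_est` between satellite arcs; surface roughness and antenna gain mainly degrade the peak-to-noise ratio mentioned in the abstract.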

  20. An estimated potentiometric surface of the Death Valley region, Nevada and California, developed using geographic information system and automated interpolation techniques

    International Nuclear Information System (INIS)

    D'Agnese, F.A.; Faunt, C.C.; Turner, A.K.

    1998-01-01

    An estimated potentiometric surface was constructed for the Death Valley region, Nevada and California, from numerous, disparate data sets. The potentiometric surface was required for conceptualization of the ground-water flow system and for construction of a numerical model to aid in the regional characterization for the Yucca Mountain repository. Because accurate, manual extrapolation of potentiometric levels over large distances is difficult, a geographic-information-system method was developed to incorporate available data and apply hydrogeologic rules during contour construction. Altitudes of lakes, springs, and wetlands, interpreted as areas where the potentiometric surface intercepts the land surface, were combined with water levels from well data. Because interpreted ground-water recharge and discharge areas commonly coincide with groundwater basin boundaries, these areas also were used to constrain a gridding algorithm and to appropriately place local maxima and minima in the potentiometric-surface map. The resulting initial potentiometric surface was examined to define areas where the algorithm incorrectly extrapolated the potentiometric surface above the land surface. A map of low-permeability rocks overlaid on the potentiometric surface also indicated areas that required editing based on hydrogeologic reasoning. An interactive editor was used to adjust generated contours to better represent the natural water table conditions, such as large hydraulic gradients and troughs, or "vees". The resulting estimated potentiometric-surface map agreed well with previously constructed maps. Potentiometric-surface characteristics including potentiometric-surface mounds and depressions, surface troughs, and large hydraulic gradients were described.

  1. Estimating the accuracy of the technique of reconstructing the rotational motion of a satellite based on the measurements of its angular velocity and the magnetic field of the Earth

    Science.gov (United States)

    Belyaev, M. Yu.; Volkov, O. N.; Monakhov, M. I.; Sazonov, V. V.

    2017-09-01

    The paper has studied the accuracy of a technique that allows the rotational motion of artificial Earth satellites (AES) to be reconstructed from onboard measurements of the angular velocity vector and the strength of the Earth's magnetic field (EMF). The technique is based on the kinematic equations of the rotational motion of a rigid body. Both types of measurement data, collected over some time interval, are processed jointly. The angular velocity measurements are approximated using convenient formulas, which are substituted into the kinematic differential equations for the quaternion that specifies the transition from the body-fixed coordinate system of the satellite to the inertial coordinate system. The equations thus obtained represent a kinematic model of the rotational motion of the satellite. The solution of these equations, which approximates the real motion, is found by the least-squares method from the condition of best fit between the measured EMF strength vector and its calculated values. The accuracy of the technique has been estimated by processing data obtained on board the service module of the International Space Station (ISS). The station motion reconstructed with this technique has been compared with telemetry data on the actual motion of the station. The technique allowed us to reconstruct the station motion in the orbital orientation mode with a maximum error of less than 0.6° and during turns with a maximum error of less than 1.2°.
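
The kinematic core of the technique, propagating the attitude quaternion from measured angular velocity via dq/dt = ½ q ⊗ (0, ω), can be sketched as below. This is a simple Euler integration with renormalization; the paper additionally fits the integrated motion to magnetometer data by least squares, which is not reproduced here.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_attitude(omega_of_t, q0, t_end, dt=1e-3):
    """Integrate dq/dt = 0.5 * q ⊗ (0, omega) with Euler steps,
    renormalizing each step to stay on the unit sphere."""
    q, t = np.array(q0, float), 0.0
    while t < t_end:
        w = omega_of_t(t)                       # body-frame angular rate
        dq = 0.5 * quat_mult(q, np.array([0.0, *w]))
        q = q + dt * dq
        q /= np.linalg.norm(q)
        t += dt
    return q

# Constant spin of 0.1 rad/s about z for 10 s -> rotation angle of 1 rad
q = integrate_attitude(lambda t: np.array([0.0, 0.0, 0.1]),
                       [1.0, 0.0, 0.0, 0.0], 10.0)
angle = 2 * np.arccos(abs(q[0]))   # total rotation angle from identity
```

In the paper's setting, `omega_of_t` would be the smooth formula fitted to the rate-gyro data, and the free initial quaternion `q0` is what the least-squares fit to the magnetometer measurements determines.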

  2. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    Science.gov (United States)

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
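
The building block of such a deterministic simulation is the attenuated path integral (Beer-Lambert law) evaluated by ray tracing. The sketch below samples a straight ray through a 2D attenuation map at uniform steps; production codes use exact voxel traversal (e.g. Siddon-style algorithms) and evaluate the first-order scatter kernel on top of this primary transmission.

```python
import numpy as np

def ray_attenuation(mu, start, end, n_steps=1000):
    """Primary-beam transmission exp(-∫ mu dl) along a straight ray
    through a 2D attenuation map `mu` (units: 1 per pixel side length)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    ts = (np.arange(n_steps) + 0.5) / n_steps      # midpoints of segments
    pts = start + ts[:, None] * (end - start)
    ij = np.clip(pts.astype(int), 0, np.array(mu.shape) - 1)
    path_integral = mu[ij[:, 0], ij[:, 1]].sum() * (length / n_steps)
    return np.exp(-path_integral)

mu = np.full((100, 100), 0.02)                 # homogeneous phantom
T = ray_attenuation(mu, (0.0, 50.0), (100.0, 50.0))
# Homogeneous case has the analytic answer exp(-0.02 * 100) = exp(-2)
```

First-order scatter is then obtained by attenuating the beam to each interaction point, weighting by the differential scattering cross-section, and attenuating again along the path to the detector.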

  3. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set ...

  4. Applying tracer techniques to NPP liquid effluents for estimating the maximum concentration of soluble pollutants in a man-made canal

    International Nuclear Information System (INIS)

    Varlam, Carmen; Stefanescu, Ioan; Varlam, Mihai; Raceanu, Mircea; Enache, Adrian; Faurescu, Ionut; Patrascu, Vasile; Bucur, Cristina

    2006-01-01

    Full text: The possibility of a contaminating agent being accidentally or intentionally spilled upstream of a water supply is a constant concern to those diverting and using water from a channel. A method of rapidly estimating travel time and dispersion is needed for pollution control or warning systems on channels where data are scarce. Travel time and mixing of water within a stream are basic streamflow characteristics needed to predict the rate of movement and dilution of pollutants that could be introduced into the stream. In this study we propose using tritiated liquid effluents from a CANDU-type nuclear power plant as a tracer to study hydrodynamics in the Danube-Black Sea Canal. This canal is ideal for this kind of study, because wastewater discharges occur occasionally due to technical operations of the nuclear power plant. Tritiated water can be used to simulate the transport and dispersion of solutes in the Danube-Black Sea Canal because it has the same physical characteristics as the water. Measured tracer-response curves produced from injection of a known amount of soluble tracer provide an efficient method of obtaining the necessary data. This method can estimate: (1) the rate of movement of a solute through the canal reach; (2) the rate of peak concentration attenuation of a conservative solute in time; and (3) the length of time required for the solute plume to pass a point in the canal. This paper presents the mixing length calculation for particular conditions (a lateral branch of the canal, and lateral injection of wastewater from the nuclear power plant). A study of published experimentally obtained formulas was used to determine the proper mixing length. Simultaneous measurements at different locations along the canal confirmed the start of the experiment. Another result, used in a further experiment, concerns the tritium level along the Danube-Black Sea Canal. We measured the tritium activity concentration in water sampled along the Canal between July ...
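
The travel-time and peak-attenuation quantities listed as (1)-(3) follow from the classical 1D advection-dispersion solution for an instantaneous release. The velocity, dispersion coefficient and channel geometry below are illustrative values, not data from the Danube-Black Sea Canal.

```python
import numpy as np

def tracer_concentration(x, t, mass, area, v, D):
    """Analytical 1D advection-dispersion solution for an instantaneous
    release of `mass` at x=0, t=0 in a channel of cross-section `area`,
    mean velocity `v` (m/s) and dispersion coefficient `D` (m^2/s)."""
    return (mass / (area * np.sqrt(4.0 * np.pi * D * t))
            * np.exp(-(x - v * t) ** 2 / (4.0 * D * t)))

# The plume peak passes x = 5000 m at t = x/v, and the peak value
# decays as 1/sqrt(t) — the "rate of peak attenuation" of item (2).
v, D = 0.5, 20.0                               # illustrative values
c_peak = tracer_concentration(5000.0, 5000.0 / v,
                              mass=1.0, area=50.0, v=v, D=D)
```

Fitting `v` and `D` to a measured tracer-response curve is exactly what the injected tritium pulse makes possible.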

  5. Estimation of Soil loss by USLE Model using GIS and Remote Sensing techniques: A case study of Muhuri River Basin, Tripura, India

    Directory of Open Access Journals (Sweden)

    Amit Bera

    2017-07-01

    Full Text Available Soil erosion is a most severe environmental problem in the humid sub-tropical hilly state of Tripura. The present study is carried out on the Muhuri river basin of Tripura state, North East India, having an area of 614.54 sq. km. In this paper, the Universal Soil Loss Equation (USLE) model, with Geographic Information System (GIS) and Remote Sensing (RS), has been used to quantify the soil loss in the Muhuri river basin. Five essential parameters, namely the runoff-rainfall erosivity factor (R), soil erodibility factor (K), slope length and steepness factor (LS), cropping management factor (C), and support practice factor (P), have been used to estimate the soil loss in the study area. All of these layers have been prepared on a GIS and RS platform (mainly ArcGIS 10.1) using various data sources and data preparation methods. In this study, DEM and LISS satellite data have been used. Daily rainfall data (2001-2010) of 6 rain gauge stations have been used to predict the R factor. The soil erodibility (K) factor in the basin area ranged from 0.15 to 0.36. The spatial distribution map of soil loss of the Muhuri river basin has been generated and classified into six categories according to the intensity of soil loss. The average annual predicted soil loss ranges between 0 and 650 t/ha/y. High soil loss (>70 t/ha/y) was found along the main course of the Muhuri River.
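
The USLE computation itself is a per-cell product of the five factors, A = R · K · LS · C · P. A minimal sketch with illustrative (not basin-specific) factor values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/y) for one grid cell:
    A = R * K * LS * C * P."""
    return R * K * LS * C * P

# Illustrative factor values for a single raster cell:
A = usle_soil_loss(R=800.0,   # rainfall erosivity, MJ mm / (ha h y)
                   K=0.30,    # soil erodibility, t ha h / (ha MJ mm)
                   LS=1.5,    # slope length-steepness (dimensionless)
                   C=0.25,    # cover management (dimensionless)
                   P=0.8)     # support practice (dimensionless)
# A = 800 * 0.30 * 1.5 * 0.25 * 0.8 = 72 t/ha/y
```

In a GIS workflow each factor is a co-registered raster layer and the multiplication is applied cell by cell to produce the soil loss map described in the abstract.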

  6. Empirical Methods for Detecting Regional Trends and Other Spatial Expressions in Antrim Shale Gas Productivity, with Implications for Improving Resource Projections Using Local Nonparametric Estimation Techniques

    Science.gov (United States)

    Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.

    2012-01-01

    The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. ?? 2011 International Association for Mathematical Geology (outside the USA).

  7. The use of the 15N isotope dilution technique to estimate the contribution of associated biological nitrogen fixation to the nitrogen nutrition of Paspalum notatum cv. batatais

    International Nuclear Information System (INIS)

    Boddey, R.M.; Doebereiner, Johanna

    1983-01-01

    This paper reports the results of a field experiment to investigate the use of the 15N-dilution technique to measure the contribution of biological N2 fixation to the N nutrition of the batatais cultivar of Paspalum notatum. The pensacola cultivar of this grass supports little associated N2 fixation, as evidenced by its low associated C2H2 reduction activity, and was thus used as a non-fixing control plant. The grasses were grown in 60-cm diameter concrete cylinders sunk into the soil, and the effects of four different addition rates of labelled nitrogen ((NH4)2SO4) were investigated. The data from seven harvests clearly demonstrated that there was a significant input of plant-associated N2 fixation to the nutrition of the batatais cultivar, amounting to approximately 20 kg N/ha/year. Problems associated with the conduct of such isotope dilution experiments are discussed, including the importance of using non-fixing control plants of similar growth habit, the advantages and disadvantages of growing the plants in cylinders as opposed to field plots, and the various methods of application of labelled N fertilizer.
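
The isotope dilution calculation behind the technique reduces to comparing the 15N enrichment of the putatively fixing plant with that of the non-fixing control: %Ndfa = (1 − E_fix/E_ref) × 100, since fixation dilutes the labelled soil N pool with unlabelled atmospheric N2. The enrichment values below are illustrative only:

```python
def pct_n_from_fixation(excess_fixing, excess_reference):
    """%Ndfa = (1 - E_fix / E_ref) * 100, where E is the atom % 15N
    excess of the fixing plant and of the non-fixing reference plant."""
    return (1.0 - excess_fixing / excess_reference) * 100.0

# Illustrative numbers: the fixing cultivar's label is diluted by
# atmospheric N2, so its 15N excess is lower than the control's.
ndfa = pct_n_from_fixation(excess_fixing=0.32, excess_reference=0.40)
# -> 20.0 % of plant N derived from fixation
```

This is why the choice of control plant matters so much: any difference in rooting pattern or N uptake timing between control and test plant biases E_ref and hence the estimate.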

  8. Revising the retrieval technique of a long-term stratospheric HNO{sub 3} data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1-sigma uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles ...

  9. Estimates of evapotranspiration for riparian sites (Eucalyptus) in the Lower Murray -Darling Basin using ground validated sap flow and vegetation index scaling techniques

    Science.gov (United States)

    Doody, T.; Nagler, P. L.; Glenn, E. P.

    2014-12-01

    Water accounting is becoming critical globally, and balancing consumptive water demands with environmental water requirements is especially difficult in arid and semi-arid regions. Within the Murray-Darling Basin (MDB) in Australia, riparian water use has not been assessed across broad scales. This study therefore aimed to apply and validate an existing U.S. riparian ecosystem evapotranspiration (ET) algorithm for the MDB river systems, to assist water resource managers in quantifying environmental water needs over wide ranges of niche conditions. Ground-based sap flow ET was correlated with remotely sensed predictions of ET to provide a method for scaling annual rates of water consumption by riparian vegetation over entire irrigation districts. Sap flux was measured at nine locations on the Murrumbidgee River between July 2011 and June 2012. Remotely sensed ET was calculated using a combination of local meteorological estimates of potential ET (ETo) and rainfall and the MODIS Enhanced Vegetation Index (EVI) from selected 250 m resolution pixels. The sap flow data correlated well with MODIS EVI. Sap flow ranged from 0.81 mm/day to 3.60 mm/day and corresponded to a MODIS-based ET range of 1.43 mm/day to 2.42 mm/day. We found that mean ET across sites could be predicted by EVI-ETo methods with a standard error of about 20% across sites, but that ET at any given site could vary much more due to differences in aquifer and soil properties among sites. Water use was within the expected range. We conclude that our algorithm developed for US arid land crops and riparian plants is applicable to this region of Australia. Future work includes the development of an adjusted algorithm using these sap-flow-validated results.
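
The vegetation-index scaling step can be sketched as a least-squares fit of ground ET against the ETo·EVI product. The paired values below are purely illustrative, and the published algorithm family uses a saturating-exponential form of EVI rather than the straight line shown here.

```python
import numpy as np

# Hypothetical paired observations: ground sap-flow ET (mm/day) and the
# ETo * EVI product for the same sites (values are illustrative only).
eto_evi = np.array([1.1, 1.6, 2.0, 2.4, 2.9, 3.3])
et_sap  = np.array([0.9, 1.4, 1.9, 2.2, 2.8, 3.1])

# Linear scaling ET ≈ a * (ETo * EVI) + b, fitted by least squares:
a, b = np.polyfit(eto_evi, et_sap, 1)
pred = a * eto_evi + b
rmse = np.sqrt(np.mean((pred - et_sap) ** 2))
```

Once `a` and `b` are calibrated against the sap-flow sites, the same relation can be applied to every 250 m MODIS pixel in a reach to map riparian water use at district scale.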

  10. Development of numerical techniques for the estimation, modeling, and prediction of thermodynamic and structural properties of metallic systems with strong chemical ordering

    Science.gov (United States)

    Harvey, Jean-Philippe

    In this work, we study the possibility of calculating and evaluating, with a high degree of precision, the Gibbs energy of complex multiphase equilibria in which chemical ordering is explicitly and simultaneously considered in the thermodynamic description of solid (short-range and long-range order) and liquid (short-range order) metallic phases. The cluster site approximation (CSA) and the cluster variation method (CVM) are implemented in a new Gibbs energy minimization technique for multicomponent and multiphase systems to describe the thermodynamic behaviour of metallic solid solutions showing strong chemical ordering. The modified quasichemical model in the pair approximation (MQMPA) is also implemented in the new minimization algorithm presented in this work to describe the thermodynamic behaviour of metallic liquid solutions. The constrained minimization technique implemented here is a sequential quadratic programming technique based on an exact Newton's method (i.e., using exact second derivatives to determine the Hessian of the objective function), combined with a line search method to identify a direction of sufficient decrease of the merit function. The implementation of a new algorithm to perform the constrained minimization of the Gibbs energy is justified by the difficulty of identifying, in specific cases, the correct multiphase assemblage of a system where the thermodynamic behaviour of the equilibrium phases is described by one of the previously quoted models using the FactSage software (e.g., solid_CSA+liquid_MQMPA; solid1_CSA+solid2_CSA). After a rigorous validation of the constrained Gibbs energy minimization algorithm using several assessed binary and ternary systems found in the literature, the CVM and CSA models used to describe the energetic behaviour of metallic solid solutions present in systems with key industrial applications, such as the Cu-Zr and Al-Zr systems, are parameterized using fully
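    The core numerical ingredient, an exact Newton step safeguarded by a backtracking line search, can be illustrated on a one-variable stand-in for the Gibbs energy models named above. The regular-solution free energy and the interaction parameter below are illustrative only; the CSA/CVM/MQMPA models of the thesis are far richer.

```python
import math

# Regular-solution Gibbs energy of mixing for a binary phase (illustrative):
#   g(x) = x ln x + (1-x) ln(1-x) + OMEGA * x * (1-x),   0 < x < 1
OMEGA = 2.5  # assumed interaction parameter, gives a double-well g(x)

def g(x):
    return x * math.log(x) + (1 - x) * math.log(1 - x) + OMEGA * x * (1 - x)

def dg(x):
    return math.log(x / (1 - x)) + OMEGA * (1 - 2 * x)

def d2g(x):  # exact second derivative, used to form the "Hessian"
    return 1 / x + 1 / (1 - x) - 2 * OMEGA

def newton_line_search(x, tol=1e-10, max_iter=100):
    """Exact Newton step plus backtracking (Armijo) line search,
    mirroring the SQP building blocks described in the abstract."""
    for _ in range(max_iter):
        grad, hess = dg(x), d2g(x)
        if abs(grad) < tol:
            break
        # fall back to steepest descent where curvature is not positive
        step = -grad / hess if hess > 0 else -grad
        t = 1.0
        # halve the step until x stays in (0, 1) and g decreases sufficiently
        while not (0 < x + t * step < 1) or g(x + t * step) > g(x) + 1e-4 * t * step * grad:
            t *= 0.5
        x += t * step
    return x
```

Starting from x = 0.2, the iteration settles into the left minimum of the double well, a toy analogue of locating one equilibrium phase composition.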

  11. Thermally assisted OSL application for equivalent dose estimation; comparison of multiple equivalent dose values as well as saturation levels determined by luminescence and ESR techniques for a sedimentary sample collected from a fault gouge

    Energy Technology Data Exchange (ETDEWEB)

    Şahiner, Eren, E-mail: sahiner@ankara.edu.tr; Meriç, Niyazi, E-mail: meric@ankara.edu.tr; Polymeris, George S., E-mail: gspolymeris@ankara.edu.tr

    2017-02-01

    Highlights: • Multiple equivalent dose estimations were carried out. • Additive ESR and regenerative luminescence were applied. • Preliminary SAR results employing the TA-OSL signal were discussed. • Saturation levels of ESR and luminescence were investigated. • IRSL{sub 175} and SAR TA-OSL stand as very promising for large doses. - Abstract: Equivalent dose (D{sub e}) estimation constitutes the most important part of both trap-charge dating techniques and dosimetry applications. In the present work, multiple independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different D{sub e} estimation approaches were carried out on both polymineral and contaminated quartz samples, using single aliquot regenerative dose protocols employing conventional OSL and IRSL signals acquired at different temperatures. Secondly, ESR equivalent dose estimates using the additive dose procedure, both at room temperature and at 90 K, were discussed. Lastly, for the first time in the literature, a single aliquot regenerative protocol employing a thermally assisted OSL signal originating from Very Deep Traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible D{sub e} estimates with great accuracy, independent of either the type of mineral or the stimulation temperature. Low temperature ESR signals resulting from Al and Ti centers indicate very large D{sub e} values due to bleaching inability, associated with large uncertainty values. Additionally, dose saturation of the different approaches was investigated. For the signal arising from Very Deep Traps in quartz, saturation is extended by almost one order of magnitude. It is
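    For a saturating-exponential growth curve, the equivalent dose read-out reduces to inverting I(D) = I_max(1 - exp(-D/D0)) at the natural signal. The sketch below assumes that functional form and illustrative parameter values; the protocols in the abstract fit the growth curve from measured regeneration points rather than taking it as given.

```python
import math

def equivalent_dose(natural_signal, i_max, d0):
    """Invert a saturating-exponential dose-response curve
    I(D) = i_max * (1 - exp(-D / d0)) to recover D_e (same dose
    units as d0). i_max and d0 would come from fitting the SAR
    regeneration points; here they are assumed inputs."""
    ratio = natural_signal / i_max
    if not 0.0 <= ratio < 1.0:
        raise ValueError("natural signal at or beyond saturation")
    return -d0 * math.log(1.0 - ratio)
```

The extended saturation of the Very Deep Trap signal corresponds, in this picture, to a much larger d0, so larger doses remain on the invertible part of the curve.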

  12. Characterization of natural porous media by NMR and MRI techniques. High and low magnetic field studies for estimation of hydraulic properties

    Energy Technology Data Exchange (ETDEWEB)

    Stingaciu, Laura-Roxana

    2010-07-01

    The aim of this thesis is to apply different NMR techniques: i) to understand the relaxometric properties of unsaturated natural porous media, and ii) to reliably quantify water content and its spatial and temporal change in model porous media and soil cores. For that purpose, porous media of increasing complexity and heterogeneity were used (coarse and fine sand and different mixtures of sand/clay) to determine the relaxation parameters, in order to select optimal sequences and parameters for water imaging. Conventional imaging is mostly performed with superconducting high-field scanners, but low-field scanners promise longer relaxation times and therefore a smaller loss of signal from water in small and partially filled pores. For this reason, high- and low-field NMR experiments were conducted on these porous media to characterize the dependence on the magnetic field strength. The NMR experiments were correlated with classical soil physics methods, such as mercury intrusion porosimetry, water retention curves (pF), and multi-step outflow (MSO), for the characterization of the hydraulic properties of the materials. Given the extent of the research, the experiments were structured in three major parts as follows. In the first part, a comparison study between relaxation experiments in high and low magnetic fields was performed in order to observe the influence of the magnetic field on the relaxation properties. Based on these results, in the second part of the study only low-field relaxation experiments were used in an attempt to correlate with classical soil physics methods (mercury intrusion porosimetry and water retention curves) for characterizing the hydraulic behavior of the samples. Further, the aim was to also combine MRI experiments (2D and 3D NMR) with classical soil physics methods (multi-step outflow, MSO) for the same purpose of investigating the hydraulic properties. Because low-field MRI systems are still under development for the
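    The relaxation measurements described above are commonly reduced to relaxation time constants. The minimal sketch below fits a single-exponential decay S(t) = S0 exp(-t/T2) by log-linear least squares; real partially saturated media are multi-exponential, so this is only the simplest building block of such an analysis.

```python
import math

def fit_t2(times, signals):
    """Estimate T2 and S0 from a CPMG-style decay by fitting
    ln S = ln S0 - t/T2 with closed-form linear regression.
    Assumes a mono-exponential, noise-free decay for illustration."""
    ys = [math.log(s) for s in signals]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    slope = sxy / sxx            # equals -1/T2
    s0 = math.exp(my - slope * mx)
    return -1.0 / slope, s0
```

On synthetic data the fit recovers T2 exactly; for measured decays one would move to multi-exponential or inverse-Laplace methods.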

  13. Characterization of natural porous media by NMR and MRI techniques. High and low magnetic field studies for estimation of hydraulic properties

    International Nuclear Information System (INIS)

    Stingaciu, Laura-Roxana

    2010-01-01

    The aim of this thesis is to apply different NMR techniques: i) to understand the relaxometric properties of unsaturated natural porous media, and ii) to reliably quantify water content and its spatial and temporal change in model porous media and soil cores. For that purpose, porous media of increasing complexity and heterogeneity were used (coarse and fine sand and different mixtures of sand/clay) to determine the relaxation parameters, in order to select optimal sequences and parameters for water imaging. Conventional imaging is mostly performed with superconducting high-field scanners, but low-field scanners promise longer relaxation times and therefore a smaller loss of signal from water in small and partially filled pores. For this reason, high- and low-field NMR experiments were conducted on these porous media to characterize the dependence on the magnetic field strength. The NMR experiments were correlated with classical soil physics methods, such as mercury intrusion porosimetry, water retention curves (pF), and multi-step outflow (MSO), for the characterization of the hydraulic properties of the materials. Given the extent of the research, the experiments were structured in three major parts as follows. In the first part, a comparison study between relaxation experiments in high and low magnetic fields was performed in order to observe the influence of the magnetic field on the relaxation properties. Based on these results, in the second part of the study only low-field relaxation experiments were used in an attempt to correlate with classical soil physics methods (mercury intrusion porosimetry and water retention curves) for characterizing the hydraulic behavior of the samples. Further, the aim was to also combine MRI experiments (2D and 3D NMR) with classical soil physics methods (multi-step outflow, MSO) for the same purpose of investigating the hydraulic properties. Because low-field MRI systems are still under development for the

  14. Estimation of Cs-137 hillslope patterns of Polesje landscapes using geo-information modeling techniques (on example of the Bryansk region)

    Science.gov (United States)

    Linnik, Vitaly; Nenko, Kristina; Sokolov, Alexander; Saveliev, Anatoly

    2015-04-01

    As a result of the Chernobyl disaster on 26 April 1986, many regions of Ukraine, Belarus, and Russia were contaminated by radionuclides. Vast areas of farmland and woodland were contaminated in Russia. The deposited radionuclides continue to cause concern about the possible contamination of food (in particular, mushrooms and berries). But radioactive materials are also an ideal marker for understanding hillslope processes in natural and seminatural landscapes. The model area chosen for the research (Opolje landscapes located in the central part of the Bryansk region) is characterized by relatively low levels of Cs-137 contamination, just 4-33 times higher than the global fallout level, which was equal to 1.75 kBq/m2 in 1986. According to the results of an air gamma survey (grid size: 100 m x 100 m) carried out in 1993, the processes of Cs-137 lateral migration could be identified explicitly, owing to a nearly fourfold increase of Cs-137 in the lower slope in comparison with the surface of the watershed during the seven-year period after the Chernobyl accident. The erosion processes that define the Cs-137 pattern in the lowest part of a hillslope depend upon such parameters as slope, hillslope form, vegetation, land use, and the roads that intersect a streamline. GIS modeling of Cs-137 was carried out in the SAGA software. The spatial modeling resolution was 100 m x 100 m, matching the air-gamma data; SRTM data were resampled to the same 100 m x 100 m grid. Erosion rates were highest on the slope of southern exposure, where the processes of lateral migration are more intensive and observed across the entire slope. The main contribution of Cs-137 input to the floodplain on the northern slopes comes only from the lower part of the slope and the network of gullies and ravines. We have used geo-information modeling techniques and several interpolation and statistical models to predict and understand the formation of Cs-137 spatial patterns and trends in soil erosion. To study the role of some
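    Slope, the first of the hillslope parameters listed above, can be derived from the resampled DEM grid with central differences. The sketch below is an illustrative finite-difference operator, not SAGA's exact slope module.

```python
import math

def slope_deg(dem, cell):
    """Slope in degrees from a DEM given as a 2D list of elevations (m)
    with grid spacing `cell` (m), using central differences on interior
    cells. Border cells are left at 0 for simplicity."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
            dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
            out[i][j] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out
```

A uniform 10 m rise per 100 m cell gives the expected atan(0.1) ≈ 5.7 degrees, matching the erosion-predictor grids described in the abstract.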

  15. AIRBORNE LIGHT DETECTION AND RANGING (LIDAR) DERIVED DEFORMATION FROM THE MW 6.0 24 AUGUST, 2014 SOUTH NAPA EARTHQUAKE ESTIMATED BY TWO AND THREE DIMENSIONAL POINT CLOUD CHANGE DETECTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    A. W. Lyda

    2016-06-01

    Full Text Available Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, a “moving window,” to handle the large spatial scale point cloud over the earthquake rupture area, the ICP process applies a rigid registration of data sets within an overlapped window to enhance the change detection results of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established, two dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection
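    The PIV correlation step described above can be reduced to a minimal integer-pixel version: shift one interrogation area over the other and keep the shift with the highest correlation score. Real PIV adds windowing, normalization, and sub-pixel peak fitting; this brute-force sketch only illustrates the principle.

```python
def piv_shift(ref, cur, max_shift):
    """Integer-pixel PIV-style displacement between two small image
    patches (2D lists): try every (dx, dy) within ±max_shift and
    return the shift maximizing the raw correlation sum."""
    rows, cols = len(ref), len(ref[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for i in range(rows):
                for j in range(cols):
                    ii, jj = i + dy, j + dx
                    if 0 <= ii < rows and 0 <= jj < cols:
                        score += ref[i][j] * cur[ii][jj]
            if best is None or score > best:
                best, best_shift = score, (dx, dy)
    return best_shift
```

A bright feature moved by one pixel between the reference and current patches is recovered as a (1, 0) displacement.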

  16. TH-EF-BRA-08: A Novel Technique for Estimating Volumetric Cine MRI (VC-MRI) From Multi-Slice Sparsely Sampled Cine Images Using Motion Modeling and Free Form Deformation

    International Nuclear Information System (INIS)

    Harris, W; Yin, F; Wang, C; Chang, Z; Cai, J; Zhang, Y; Ren, L

    2016-01-01

    Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling, and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and the temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respiratory changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using the Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. The effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number, and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while
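    The sparse Cartesian sampling described above (10% of k-space lines, 85% of them central) can be sketched as a mask generator. The selection scheme below, a symmetric central band plus random outer lines, is an assumption for illustration; the abstract does not specify how individual lines are chosen.

```python
import random

def kspace_mask(n_lines=256, frac=0.10, central_frac=0.85, seed=0):
    """Pick phase-encode line indices for sparse Cartesian cine sampling:
    `frac` of all lines in total, with `central_frac` of the picks taken
    from a symmetric band around the k-space center. The fractions follow
    the abstract; the band-plus-random scheme is an illustrative choice."""
    rng = random.Random(seed)
    n_total = max(1, round(n_lines * frac))
    n_central = round(n_total * central_frac)
    center = n_lines // 2
    half = max(n_central // 2, 1)
    central_band = list(range(center - half, center + half))
    outer = [i for i in range(n_lines) if i not in central_band]
    picks = rng.sample(central_band, min(n_central, len(central_band)))
    picks += rng.sample(outer, n_total - len(picks))
    return sorted(picks)
```

With 256 lines this yields 26 sampled lines, 22 of them from the central band, i.e. roughly the 10%/85% split quoted in the abstract.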

  17. TH-EF-BRA-08: A Novel Technique for Estimating Volumetric Cine MRI (VC-MRI) From Multi-Slice Sparsely Sampled Cine Images Using Motion Modeling and Free Form Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Harris, W; Yin, F; Wang, C; Chang, Z; Cai, J; Zhang, Y; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling, and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and the temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respiratory changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using the Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. The effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number, and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while

  18. Iterative Decomposition of Water and Fat with Echo Asymmetry and Least-Squares Estimation (IDEAL) (Reeder et al. 2005) Automated Spine Survey Iterative Scan Technique (ASSIST) (Weiss et al. 2006)

    Directory of Open Access Journals (Sweden)

    Kenneth L. Weiss

    2008-01-01

    Full Text Available Background and Purpose: Multi-parametric MRI of the entire spine is technologist-dependent, time consuming, and often limited by inhomogeneous fat suppression. We tested a technique to provide rapid automated total spine MRI screening with improved tissue contrast through optimized fat-water separation. Methods: The entire spine was auto-imaged in two contiguous 35 cm field of view (FOV) sagittal stations, utilizing out-of-phase fast gradient echo (FGRE) and T1- and/or T2-weighted fast spin echo (FSE) IDEAL (Iterative Decomposition of Water and Fat with Echo Asymmetry and Least-squares Estimation) sequences. 18 subjects were studied, one twice at 3.0T (pre- and post-contrast) and one at both 1.5T and 3.0T, for a total of 20 spine examinations (8 at 1.5T and 12 at 3.0T). Images were independently evaluated by two neuroradiologists and run through Automated Spine Survey Iterative Scan Technique (ASSIST) analysis software for automated vertebral numbering. Results: In all 20 total spine studies, neuroradiologist and computer ASSIST labeling were concordant. In all cases, IDEAL provided uniform fat and water separation throughout the entire 70 cm FOV imaged. Two subjects demonstrated breast metastases and one had a large presumptive schwannoma. 14 subjects demonstrated degenerative disc disease with associated Modic Type I or II changes at one or more levels. FGRE ASSIST afforded sub-minute, submillimeter in-plane resolution of the entire spine with high contrast between discs and vertebrae at both 1.5T and 3.0T. Marrow signal abnormalities could be particularly well characterized with IDEAL-derived images and parametric maps. Conclusion: IDEAL ASSIST is a promising MRI technique affording a rapid, automated, high resolution, high contrast survey of the entire spine with optimized tissue characterization.
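    IDEAL builds on the in-phase/out-of-phase idea of two-point Dixon imaging. The sketch below shows only that simpler underlying decomposition (water = (IP+OP)/2, fat = (IP-OP)/2); IDEAL itself adds asymmetric echo times and iterative least-squares field-map estimation, which this sketch does not attempt.

```python
def dixon_separation(in_phase, out_phase):
    """Two-point Dixon water/fat separation, pixel by pixel:
        water = (IP + OP) / 2,   fat = (IP - OP) / 2
    Inputs are 2D lists of magnitude values with fat and water
    assumed perfectly in/out of phase (no B0 inhomogeneity)."""
    water = [[(ip + op) / 2.0 for ip, op in zip(ri, ro)]
             for ri, ro in zip(in_phase, out_phase)]
    fat = [[(ip - op) / 2.0 for ip, op in zip(ri, ro)]
           for ri, ro in zip(in_phase, out_phase)]
    return water, fat
```

A pixel containing water 3 and fat 1 gives IP = 4 and OP = 2, and the decomposition recovers the two components; field-map errors break this, which is exactly what IDEAL's least-squares estimation addresses.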

  19. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
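    The LMS weight update at the heart of the estimator can be sketched in a few lines. The floating-point filter below is an illustrative ADALINE-style implementation identifying a one-sample delay; the chip described in the abstract constrains the update for multiplierless switched-current hardware, which is not reproduced here.

```python
def lms_train(xs, ds, n_taps=4, mu=0.05, epochs=50):
    """Train a linear FIR filter with the LMS rule:
        y = w . window,  e = d - y,  w <- w + mu * e * window
    `xs` is the input signal, `ds` the desired output; the learned
    weights reveal the input/output delay, as in the flow estimator."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for t in range(n_taps - 1, len(xs)):
            window = xs[t - n_taps + 1:t + 1][::-1]  # newest sample first
            y = sum(wi * xi for wi, xi in zip(w, window))
            e = ds[t] - y
            w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w
```

When the desired signal is the input delayed by one sample, the weight on the one-sample-old tap converges toward 1, which is how a delay (and hence a flow velocity) can be read off the trained weights.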

  20. Experimental techniques; Techniques experimentales

    Energy Technology Data Exchange (ETDEWEB)

    Roussel-Chomaz, P. [GANIL CNRS/IN2P3, CEA/DSM, 14 - Caen (France)

    2007-07-01

    This lecture presents the experimental techniques developed in the last 10 or 15 years to perform a new class of experiments with exotic nuclei, where the reactions induced by these nuclei allow information on their structure to be obtained. A brief review of secondary beam production methods will be given, with some examples of facilities in operation or under project. The important developments performed recently on cryogenic targets will be presented. The different detection systems will be reviewed: both the beam detectors before the target, and the many kinds of detectors necessary to detect all outgoing particles after the reaction: magnetic spectrometers for the heavy fragments, detection systems for the target recoil nucleus, and {gamma} detectors. Finally, several typical examples of experiments will be detailed, in order to illustrate the use of each detector either alone or in coincidence with others. (author)

  1. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  2. Development of estimation method for tephra transport and dispersal characteristics with numerical simulation technique. Part 2. A method of selecting meteorological conditions and the effects on ash deposition and concentration in air for Kanto-area

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Toshida, Kiyoshi; Hirakuchi, Hiromaru

    2016-01-01

    In the present study, we examine the estimation of ground deposition for a real test case, a volcanic ash hazard in the Kanto area, under various meteorological conditions, using the ash transport and deposition model fall3d; we consider three eruptions, corresponding to stages 1 and 3 of the Hoei eruption of Mt. Fuji and to the Tenmei eruption of Mt. Asama. The meteorological conditions are generated with the 53-year reanalysis meteorological dataset CRIEPI-RCM-Era2, which has temporal and spatial resolutions of 1 hr and 5 km. Typical and extreme conditions were sampled using a Gumbel plot and an artificial neural network technique. The ash deposition is invariably limited to the area west of the vent, even under typical summer wind conditions, while the isopachs of ground deposition show various distributions that depend strongly on the meteorological conditions. This implies that a concentric circular distribution cannot be realistic. Also, a long-term eruption, such as stage 3 of the Hoei eruption, yields a large deposition area due to daily variations of wind direction, suggesting that attention to the difference between daily variations and fluctuations of wind direction when evaluating volcanic ash risk is vital. (author)
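    The Gumbel-plot sampling of typical and extreme wind conditions rests on plotting positions. The sketch below computes Gumbel reduced variates for a sample; the plotting-position formula p_i = i/(n+1) is one common convention, assumed here rather than taken from the paper.

```python
import math

def gumbel_reduced_variates(values):
    """Gumbel plotting positions for an annual-maximum style sample:
    sort ascending, assign empirical probability p_i = i/(n+1), and
    compute the reduced variate y_i = -ln(-ln p_i). Extreme members
    sit at the largest y; a straight line on (y, value) supports
    reading off return levels."""
    xs = sorted(values)
    n = len(xs)
    out = []
    for i, x in enumerate(xs, start=1):
        p = i / (n + 1)
        out.append((-math.log(-math.log(p)), x))
    return out
```

Sampling "extreme" meteorological members then amounts to picking the dates whose values fall at the upper end of this plot.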

  3. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices. Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by the leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el