WorldWideScience

Sample records for length estimator technique

  1. RUN LENGTH SYNCHRONIZATION TECHNIQUES

    Science.gov (United States)

An important aspect of digital communications is the problem of determining efficient methods for acquiring block synchronization. In this paper we... utilizes an N-digit sync sequence as a prefix to the data blocks. The results of this study show that this technique is a practical method for acquiring block synchronization.

  2. ESTIMATION OF STATURE BASED ON FOOT LENGTH

    Directory of Open Access Journals (Sweden)

    Vidyullatha Shetty

    2015-01-01

Full Text Available BACKGROUND: Stature is the height of a person in the upright posture and an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important applications in identifying living or dead human bodies, or remains recovered from disasters and other similar conditions. OBJECTIVE: Stature is an important indicator for identification. There are numerous means of establishing stature, and their significance lies in simplicity of measurement, applicability and accuracy of prediction. The aim of the study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies undertaken to estimate stature. All measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows a correlation between stature and foot dimensions that is positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender can be estimated from foot measurements. The study will help in medico-legal cases in establishing the identity of an individual, and would be useful to anatomists and anthropologists in calculating stature based on foot length.

  3. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  4. Blind sequence-length estimation of low-SNR cyclostationary sequences

    CSIR Research Space (South Africa)

    Vlok, JD

    2014-06-01

    Full Text Available Several existing direct-sequence spread spectrum (DSSS) detection and estimation algorithms assume prior knowledge of the symbol period or sequence length, although very few sequence-length estimation techniques are available in the literature...

  5. Estimation of ocular volume from axial length.

    Science.gov (United States)

    Nagra, Manbir; Gilmartin, Bernard; Logan, Nicola S

    2014-12-01

To determine which biometric parameters provide optimum predictive power for ocular volume. Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE) (D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (in mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were -2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm³) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Bar CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R² value of 79.4% for TOV. Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR. Published by the BMJ Publishing Group Limited.
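The reported multiple linear regression of TOV on AL and CR can be sketched as an ordinary least-squares fit. The synthetic data and coefficients below are illustrative assumptions, not the study's values; only the sample size and the AL/CR means echo the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67
AL = rng.normal(24.5, 1.5, n)     # axial length, mm
CR = rng.normal(7.75, 0.28, n)    # corneal radius, mm
# Hypothetical linear relation with noise (coefficients are invented)
TOV = 900.0 * AL - 400.0 * CR - 10000.0 + rng.normal(0.0, 300.0, n)

# Ordinary least squares: TOV ~ 1 + AL + CR
X = np.column_stack([np.ones(n), AL, CR])
beta, *_ = np.linalg.lstsq(X, TOV, rcond=None)
pred = X @ beta
ss_res = np.sum((TOV - pred) ** 2)
ss_tot = np.sum((TOV - TOV.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot        # coefficient of determination
```

With data generated this way, `r2` comes out high because AL dominates the variance, mirroring the study's finding that AL plus CR predicts TOV well.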

  6. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
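The decomposition of first-stage estimates into a dictionary matrix times a sparse vector can be sketched with orthogonal matching pursuit. The solver choice and all dimensions are assumptions for illustration; the record does not specify how the sparse vector is recovered.

```python
import numpy as np

def omp(y, D, k):
    """Orthogonal matching pursuit: approximate y as D @ x with at most k nonzeros."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs        # re-fit on current support
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[5, 20]] = [1.5, -2.0]           # sparse "second coefficient" vector
y = D @ x_true                          # noiseless first-stage estimates
x_hat = omp(y, D, k=2)
```

In the patent's setting the dictionary would encode the known transceiver filter characteristics rather than random atoms.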

  7. Correlation length estimation in a polycrystalline material model

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2005-01-01

    This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated from a finite-element model, which explicitly accounts for the selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the used set of crystallographic orientations. We determined that the different sets of crystallographic orientations affect the general level of the correlation length, however, as the external load is increased the behaviour of correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength the correlation lengths are constant, and are slightly higher than the average grain size. The correlation length can therefore be considered as an indicator of first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the values of the correlation lengths calculated from the strain fields. With a further load increase the correlation lengths decrease slightly but stay above the average grain size. (author)

  8. A Small Crack Length Evaluation Technique by Electronic Scanning

    International Nuclear Information System (INIS)

    Cho, Yong Sang; Kim, Jae Hoon

    2009-01-01

The results of crack evaluation by conventional ultrasonic testing (UT) depend highly on the inspector's experience and knowledge of ultrasound. A phased-array UT system and its application methods for small-crack length evaluation are a good alternative that overcomes this weakness of conventional UT. This study aimed to check the accuracy of crack length evaluation by electronic scanning and to discuss the characteristics of electronic scanning for crack length evaluation. In particular, an ultrasonic phased array with the electronic scan technique was used to carry out both sizing and detection of cracks as their length changes. The response of the ultrasonic phased array was analyzed to obtain a method of determining crack length without moving the transducer, and the minimal detectable crack length and depth in the material. The method of determining crack length by electronic scanning is practical for small cracks; its accuracy and effectiveness were verified by comparison with conventional crack length determination.

  9. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  10. Performance Evaluation Of Furrow Lengths And Field Application Techniques

    Directory of Open Access Journals (Sweden)

    Issaka

    2015-08-01

Full Text Available Abstract: The study evaluated the performance of furrow lengths and field application techniques. The experiment was conducted on a 2000 m² field at the Bontanga irrigation scheme. A Randomized Complete Block Design (RCBD) was used with three replicates, Blocks A, B and C, of furrow lengths 100 m, 75 m and 50 m respectively. Each replicate had surge, cut-off, cut-back and bunds treatments. Water was introduced into the furrows and the advance distances and times were measured. Results of the study showed that at Block A the surge technique recorded the highest advance rate of 1.26 min/m and an opportunity time of 11 min, whilst bunds recorded the lowest advance rate of 0.92 min/m. A significant difference (3.32, p ≥ 0.05) occurred between treatment means of field application techniques at Block A (100 m). A significant difference (2.71, p ≥ 0.05) was also recorded between treatment means at Block B (75 m). No significant difference (0.14, p ≤ 0.05) was observed among the surge, cut-back and bunds techniques. There was a significant difference (2.60, p ≥ 0.05) between treatment means, but no significant difference between the cut-back and bunds techniques, in Block C (50 m). Performance was ranked in the order Surge > Cut-back > Cut-off > Bunds for furrow lengths 100 m, 75 m and 50 m respectively.

  11. Sonographic fetal weight estimation using femoral length: Honarvar Equation

    International Nuclear Information System (INIS)

    Firoozabadi, Raziah Dehghani; Ghasemi, N.; Firoozabadi, Mehdi Dehghani

    2007-01-01

Fetal growth is the result of interactions between various factors and can be estimated by ultrasonic measurements. Fetal femur length is a scale for estimating fetal weight in individual races, because fetal growth patterns differ among races. This was a prospective study involving 500 pregnant women at 36 weeks of gestational age. Real-time sonography was done to measure the femoral length, and the weight of the fetus was estimated by the Honarvar 2 equation. The correlation between estimated fetal weight (EFW) and real weight was tested by the Pearson correlation coefficient, and relationships with the age and BMI of the mother, the sex of the neonate and parity were tested by multiple regression. EFW by the Honarvar 2 equation correlated significantly with actual birthweight; therefore, this equation is valid for fetal weight estimation. It also does not depend on the age and BMI of the mother, the sex of the neonate, or parity. Ethnicity potentially plays an important role in fetal weight estimation. The Honarvar formula produced the best estimate of the actual birthweight for Iranian fetuses, and its use is recommended. (author)

  12. Mobile Stride Length Estimation With Deep Convolutional Neural Networks.

    Science.gov (United States)

    Hannink, Julius; Kautz, Thomas; Pasluosta, Cristian F; Barth, Jens; Schulein, Samuel; GaBmann, Karl-Gunter; Klucken, Jochen; Eskofier, Bjoern M

    2018-03-01

Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit clinical applicability of state-of-the-art double integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a tenfold cross validation and for three different stride definitions. Even though best results are achieved with strides defined from midstance to midstance, performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on the same benchmark dataset. Due to the independence of stride definition, the proposed method is not subject to the methodological constraints that limit applicability of state-of-the-art double integration methods. Furthermore, it was possible to improve precision on the benchmark dataset. With more precise mobile stride length estimation, new insights into the progression of neurological disease or early indications might be gained. Due to the independence of stride definition, diseases previously uncharted in terms of mobile gait analysis can now be investigated by retraining and applying the proposed method.
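The mapping from stride-specific inertial data to a scalar stride length can be sketched in numpy as a one-layer 1-D convolution with pooling and a linear readout. Layer sizes, channel counts and the random (untrained) weights are illustrative stand-ins for the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w):
    """Valid-mode multi-channel 1-D convolution with ReLU; x: (C, T), w: (F, C, K)."""
    F, C, K = w.shape
    T = x.shape[1] - K + 1
    out = np.zeros((F, T))
    for f in range(F):
        for t in range(T):
            out[f, t] = np.sum(w[f] * x[:, t:t + K])
    return np.maximum(out, 0.0)

# One stride segment: 6 inertial channels (3 accel + 3 gyro), 128 samples
stride = rng.normal(size=(6, 128))
w1 = rng.normal(scale=0.1, size=(8, 6, 9))   # 8 filters, kernel length 9
features = conv1d_relu(stride, w1)           # shape (8, 120)
pooled = features.mean(axis=1)               # global average pooling
w_out = rng.normal(scale=0.1, size=8)
stride_length_m = float(pooled @ w_out)      # scalar regression output
```

A real implementation would stack several such layers and train the weights against reference stride lengths; the sketch only shows the data flow from sensor segment to scalar estimate.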

  13. A simple method for estimating the length density of convoluted tubular systems.

    Science.gov (United States)

    Ferraz de Carvalho, Cláudio A; de Campos Boldrini, Silvia; Nishimaru, Flávio; Liberti, Edson A

    2008-10-01

We present a new method for estimating the length density (Lv) of convoluted tubular structures exhibiting an isotropic distribution. Although the traditional equation Lv=2Q/A is used, the parameter Q is obtained by considering the collective perimeters of tubular sections. This measurement is converted to a standard model of the structure, assuming that all cross-sections are approximately circular and have an average perimeter similar to that of actual circular cross-sections observed in the same material. The accuracy of this method was tested in eight experiments using hollow macaroni bent into helical shapes. After measuring the length of the macaroni segments, they were boiled and randomly packed into cylindrical volumes along with an aqueous suspension of gelatin and India ink. The solidified blocks were cut into slices 1.0 cm thick and 33.2 cm² in area (A). The total perimeter of the macaroni cross-sections so revealed was stereologically estimated using a test system of straight parallel lines. Given Lv and the reference volume, the total length of macaroni in each section could be estimated. Additional corrections were made for the changes induced by boiling, and the off-axis position of the thread used to measure length. No statistical difference was observed between the corrected estimated values and the actual lengths. This technique is useful for estimating the length of capillaries, renal tubules, and seminiferous tubules.
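The arithmetic of the estimator can be sketched directly. Only the 33.2 cm² slice area and 1.0 cm thickness come from the abstract; the perimeter measurements below are invented for illustration.

```python
# Illustrative measurements on one slice
total_profile_perimeter = 249.0   # summed perimeters of all tube sections, cm
mean_circle_perimeter = 3.0       # mean perimeter of a truly circular section, cm
A = 33.2                          # slice area, cm^2 (from the study)
slice_thickness = 1.0             # cm (from the study)

# Equivalent number of circular profiles, from collective perimeters
Q = total_profile_perimeter / mean_circle_perimeter
# Classical stereological relation for isotropic tubes
Lv = 2.0 * Q / A                  # length density, cm of tube per cm^3
V_ref = A * slice_thickness       # reference volume of the slice, cm^3
total_length = Lv * V_ref         # estimated tube length in the slice, cm
```

With these numbers, 83 equivalent profiles over 33.2 cm² give a length density of 5 cm/cm³ and 166 cm of tube in the slice.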

  14. IMPROVED ESTIMATION OF FIBER LENGTH FROM 3-DIMENSIONAL IMAGES

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2013-03-01

Full Text Available A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3×3×3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. An algorithmic implementation of the method is simple, and the runtime as well as the amount of memory space required are low. The estimation is significantly improved by considering 3×3×3-pixel configurations instead of 2×2×2, as already studied in the literature.

  15. Influence of crack length on crack depth measurement by an alternating current potential drop technique

    International Nuclear Information System (INIS)

    Raja, Manoj K; Mahadevan, S; Rao, B P C; Behera, S P; Jayakumar, T; Raj, Baldev

    2010-01-01

    An alternating current potential drop (ACPD) technique is used for sizing depth of surface cracks in metallic components. Crack depth estimations are prone to large deviations when ACPD measurements are made on very shallow and finite length cracks, especially in low conducting materials such as austenitic stainless steel (SS). Detailed studies have been carried out to investigate the influence of crack length and aspect ratio (length to depth) on depth estimation by performing measurements on electric discharge machined notches with the aspect ratio in the range of 1 to 40 in SS plates. In notches with finite length, an additional path for current to flow through the surface along the length is available causing the notch depths to be underestimated. The experimentally observed deviation in notch depth estimates is explained from a simple mathematical approach using the equivalent resistive circuit model based on the additional path available for the current to flow. A scheme is proposed to accurately measure the depth of cracks with finite lengths in SS components
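The equivalent resistive circuit idea can be sketched as two parallel current paths: one over the crack face and one around its ends. Treating resistances as proportional to path length is a simplifying assumption for illustration, not the paper's exact formulation.

```python
def apparent_depth(d, length, delta=1.0):
    """Depth inferred from an ACPD reading when a finite-length crack offers
    a bypass path around its ends (toy parallel-resistance model)."""
    r_over = delta + 2.0 * d       # path down one crack face and up the other
    r_around = delta + length      # surface bypass around the crack ends
    r_meas = r_over * r_around / (r_over + r_around)   # parallel combination
    return (r_meas - delta) / 2.0  # invert the infinite-length-crack formula

# A finite crack reads shallower than it really is...
shallow_reading = apparent_depth(d=5.0, length=20.0)
# ...and the bias vanishes as the crack becomes very long.
long_crack_reading = apparent_depth(d=5.0, length=1e9)
```

This reproduces the qualitative finding: notch depths are underestimated for finite lengths, with the underestimation disappearing as the aspect ratio grows.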

  16. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  17. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...


  19. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
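The development/testing split with an ordinary linear regression can be sketched as below. The synthetic fibula-age relation and its coefficients are invented for illustration; only the cohort size and the 123/124 split follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 247
# Synthetic stand-in data: age (days) and fibular shaft length (mm);
# the growth rate and noise level are assumptions, not the paper's model.
age = rng.uniform(0.0, 365.0, n)
fib = 55.0 + 0.12 * age + rng.normal(0.0, 3.0, n)

idx = rng.permutation(n)
dev, tst = idx[:123], idx[123:]          # model development / model testing

# Fit bone age as a linear function of fibular shaft length (OLS)
A = np.column_stack([np.ones(dev.size), fib[dev]])
b, *_ = np.linalg.lstsq(A, age[dev], rcond=None)

pred = b[0] + b[1] * fib[tst]
rmse = float(np.sqrt(np.mean((pred - age[tst]) ** 2)))
mae = float(np.mean(np.abs(pred - age[tst])))
```

Evaluating on the held-out half mimics how the study reports root-mean-square error and mean absolute error in days.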


  1. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    Science.gov (United States)

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
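The core idea of removing an O(1/N) systematic error by combining estimates at several chain lengths can be sketched as a linear fit in 1/N whose intercept is the N→∞ value. The numbers below are synthetic and illustrative, not the paper's data or its specific estimator.

```python
import numpy as np

# Apparent entanglement length measured at several chain lengths N,
# following an assumed form Ne_app(N) = Ne_inf * (1 - c/N)
N = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
Ne_inf_true, c = 85.0, 30.0
Ne_app = Ne_inf_true * (1.0 - c / N)

# Fit Ne_app = a + b * (1/N); the intercept a extrapolates away the O(1/N) bias
X = np.column_stack([np.ones_like(N), 1.0 / N])
(a, b), *_ = np.linalg.lstsq(X, Ne_app, rcond=None)
```

Any single-N estimate here is biased low (by 60% at N = 50), while the intercept recovers the asymptotic entanglement length exactly for this synthetic data.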

  2. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

Autonomous unmanned aerial vehicle (UAV) systems depend on state-estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
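The Kalman-filter side of an AHRS can be sketched as a scalar filter that predicts attitude by integrating a rate gyro and corrects it with an accelerometer-derived angle. The constant-attitude scenario and noise levels are assumptions for illustration, not the paper's EKF.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01                 # sample period, s
true_angle = 0.3          # constant pitch angle, rad
angle_est, P = 0.0, 1.0   # state estimate and its variance
Q, R = 1e-5, 0.05         # process / measurement noise variances (assumed)

for _ in range(500):
    gyro = rng.normal(0.0, 0.01)                    # rate gyro (true rate is 0)
    accel_angle = true_angle + rng.normal(0.0, 0.2)  # noisy accel-derived angle
    # Predict: integrate the gyro rate
    angle_est += gyro * dt
    P += Q
    # Update: blend in the accelerometer measurement
    K = P / (P + R)
    angle_est += K * (accel_angle - angle_est)
    P *= (1.0 - K)
```

The estimate converges to the true attitude with far less scatter than the raw accelerometer angle, which is the essential benefit of the filtered AHRS configuration.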

  3. Can anchovy age structure be estimated from length distribution ...

    African Journals Online (AJOL)

    The analysis provides a new time-series of proportions-at-age 1, together with associated standard errors, for input into assessments of the resource. The results also caution against the danger of scientists reading more information into data than is really there. Keywords: anchovy, effective sample size, length distribution, ...

  4. Estimation Issues and Generational Changes in Modeling Criminal Career Length

    Science.gov (United States)

    Francis, Brian; Soothill, Keith; Piquero, Alex R.

    2007-01-01

    This article seeks to model criminal career length using data from six different birth cohorts born between 1953 and 1978, totaling more than 58,000 males and females from England and Wales. A secondary aim of this article is to consider whether information available at the first court appearance leading to a conviction is associated with the…

  5. An Amplitude Spectral Capon Estimator with a Variable Filter Length

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Smaragdis, Paris; Christensen, Mads Græsbøll

    2012-01-01

    The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter...

  6. Hydrological Storage Length Scales Represented by Remote Sensing Estimates of Soil Moisture and Precipitation

    Science.gov (United States)

    Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara

    2018-03-01

    The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
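The Δz optimization can be sketched as a grid search minimizing the RMS difference between "observed" surface soil moisture and a water-balance prediction. The linear loss function, synthetic rainfall and noise level are assumptions for illustration; the study estimates its loss function from dry-downs rather than assuming it.

```python
import numpy as np

rng = np.random.default_rng(4)
days = 200
P = rng.exponential(2.0, days) * (rng.random(days) < 0.3)  # sparse rain, mm/day
lam = 3.0   # assumed linear loss rate, mm/day per unit saturation

def simulate(P, dz, s0=0.3):
    """Water balance for the surface layer: s[t+1] = s[t] + (P[t] - lam*s[t]) / dz."""
    s, prev = np.empty(P.size), s0
    for t in range(P.size):
        prev = np.clip(prev + (P[t] - lam * prev) / dz, 0.0, 1.0)
        s[t] = prev
    return s

# Synthetic "satellite" series generated with a true depth of 135 mm
obs = simulate(P, dz=135.0) + rng.normal(0.0, 0.002, days)

grid = np.arange(50.0, 305.0, 5.0)                     # candidate depths, mm
rms = np.array([np.sqrt(np.mean((simulate(P, dz) - obs) ** 2)) for dz in grid])
dz_hat = float(grid[np.argmin(rms)])
```

The grid search recovers a depth near the 135 mm used to generate the series, which is also the median Δz the study reports across CONUS.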

  7. Studies on the Estimation of Stature from Hand and Foot Length of an Individual

    Directory of Open Access Journals (Sweden)

    O. S. Saka

    2016-10-01

Full Text Available Background: Estimation of stature from the hand and foot length of an individual is an essential study in personal identification. Aim and Objectives: This study set out to find the correlation between stature and hand and foot dimensions in both sexes, and to compare genders, among individuals at Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, Nigeria. Material and Methods: A sample of 140 students and staff (70 male and 70 female) of Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, aged 16-35 years, was considered, and measurements were taken for each of the parameters. Gender differences for the two parameters were determined using Student's t-test. Pearson's correlation coefficient (r) was used to examine the relationship between the two anthropometric parameters and standing height (stature). All measurements were made using standard anthropometric instruments and standard anthropometric techniques. Results: The findings of the study indicated that male mean values were not significantly different from female mean values in all measured parameters. The study showed a significant (p<0.001) positive correlation between stature and both hand and foot lengths. Hand and foot length provide an accurate and reliable means of establishing the height of an individual. Conclusion: This study will be useful to forensic scientists and anthropologists, as well as anatomists, in ascertaining medico-legal cases.

  8. Testing an Alternative Method for Estimating the Length of Fungal Hyphae Using Photomicrography and Image Processing.

    Science.gov (United States)

    Shen, Qinhua; Kirschbaum, Miko U F; Hedley, Mike J; Camps Arbestain, Marta

    2016-01-01

This study aimed to develop and test an unbiased and rapid methodology to estimate the length of external arbuscular mycorrhizal fungal (AMF) hyphae in soil. The traditional visual gridline intersection (VGI) method, which consists of a direct visual examination of the intersections of hyphae with gridlines on a microscope eyepiece after aqueous extraction, membrane filtration, and staining (e.g., with trypan blue), was refined. For this, (i) images of the stained hyphae were taken using a digital photomicrography technique to avoid the use of the microscope, referred to as the "digital gridline intersection" (DGI) method; and (ii) the images taken in (i) were processed and the hyphal length measured using ImageJ software, referred to as the "photomicrography-ImageJ processing" (PIP) method. The DGI and PIP methods were tested using known grade lengths of possum fur, then applied to measure hyphal lengths in soils with contrasting phosphorus (P) fertility status. Linear regressions were obtained between the known lengths (L_known) of possum fur and the values determined using either the DGI method (L_DGI = 0.37 + 0.97 × L_known, r² = 0.86) or the PIP method (L_PIP = 0.33 + 1.01 × L_known, r² = 0.98). There were no significant (P > 0.05) differences between the L_DGI and L_PIP values. While both methods provided accurate estimation (slope of regression being 1.0), the PIP method was more precise, as reflected by a higher value of r² and lower coefficients of variation. The average hyphal lengths (6.5-19.4 m g⁻¹) obtained by the use of these methods were in the range of those typically reported in the literature (3-30 m g⁻¹). Roots growing in P-deficient soil developed 2.5 times as many hyphae as roots growing in P-rich soil (17.4 vs 7.2 m g⁻¹). These tests confirmed that the use of digital photomicrography in conjunction with either the grid-line intersection principle or image processing is a suitable method for the
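The gridline-intersection arithmetic behind the DGI counts can be sketched with the standard stereological relation L_A = (π/2)·I_L, where I_L is intersections per unit test-line length. All the specimen numbers below are invented for illustration.

```python
import math

# Illustrative counts from one filter
intersections = 180          # hypha-gridline crossings counted
grid_line_total = 250.0      # total length of test lines examined, mm
filter_area = 200.0          # effective filter area, mm^2
soil_mass_g = 0.02           # soil represented on the filter, g (assumed)

I_L = intersections / grid_line_total          # crossings per mm of test line
length_density = (math.pi / 2.0) * I_L         # mm of hyphae per mm^2 of filter
total_length_m = length_density * filter_area / 1000.0   # metres on the filter
length_per_g = total_length_m / soil_mass_g    # hyphal length per gram of soil
```

With these assumed counts the estimate lands at roughly 11.3 m g⁻¹, inside the 3-30 m g⁻¹ range the abstract cites as typical.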

  9. Critical length sampling: a method to estimate the volume of downed coarse woody debris

    Science.gov (United States)

    Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2010-01-01

    In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...

  10. Development of software and modification of Q-FISH protocol for estimation of individual telomere length in immunopathology.

    Science.gov (United States)

    Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A

    2017-04-01

    Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of the immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells followed by digital image microscopy allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and correction of nonuniform light effects. The application of the developed software for analysis of telomere length in patients with rheumatoid arthritis was demonstrated.

  11. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants, and to the cumulative number of operating years. Using nine core damage accidents in electricity-producing plants as a database, it is estimated that the probability that a plant has a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time, the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
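The maximum-likelihood side of this can be sketched for one plausible learning model, rate(t) = a·exp(−b·t) events per reactor-year. The exposure figures and accident counts below are simulated illustrations, not the paper's actuarial data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical actuarial data: growing reactor-year exposure per period and
# Poisson accident counts from a decaying "learning" rate (illustrative only).
years = np.arange(1, 21)
exposure = 50.0 * years                      # reactor-years at risk (assumed)
true_a, true_b = 0.04, 0.15
counts = rng.poisson(true_a * np.exp(-true_b * years) * exposure)

def nll(params):
    """Poisson negative log-likelihood of the learning model."""
    a, b = params
    if a <= 0.0 or b < 0.0:
        return np.inf                        # keep the rate positive
    lam = a * np.exp(-b * years) * exposure  # expected counts per period
    return float(np.sum(lam - counts * np.log(lam)))

fit = minimize(nll, x0=[0.01, 0.05], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(round(a_hat, 3), round(b_hat, 3))
```

With real data one would compare this MLE against method-of-moments estimates, as the abstract describes.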

  12. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss some crucial information about a control chart’s performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X chart with estimated process parameters, based on their specific purpose.
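To see why percentiles matter alongside the ARL, consider the simplest case of a chart whose successive samples signal independently with probability p, so the run length is geometric. (The DS X chart's run-length law is more involved; this sketch only illustrates the skewness argument.)

```python
import math

def rl_percentile(p, q):
    """Smallest n with P(RL <= n) >= q for a geometric run length."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 1.0 / 370.4            # in-control signal probability (ARL0 = 370.4)
arl = 1.0 / p              # average run length
mrl = rl_percentile(p, 0.5)            # median run length
p5 = rl_percentile(p, 0.05)            # early false alarms
p95 = rl_percentile(p, 0.95)
print(arl, mrl, p5, p95)
```

The median run length sits well below the ARL: an "average" of 370 conceals that half of all false alarms arrive by sample 257, and five percent by sample 19.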

  13. Preoperative estimation of tibial nail length--because size does matter.

    LENUS (Irish Health Repository)

    Galbraith, J G

    2012-11-01

    Selecting the correct tibial nail length is essential for satisfactory outcomes. Nails that are inserted and are found to be of inappropriate length should be removed. Accurate preoperative nail estimation has the potential to reduce intra-operative errors, operative time and radiation exposure.

  14. Transport-constrained extensions of collision and track length estimators for solutions of radiative transport problems

    International Nuclear Information System (INIS)

    Kong, Rong; Spanier, Jerome

    2013-01-01

    In this paper we develop novel extensions of collision and track length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems.
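The two conventional estimator families can be compared on exactly such a slab problem. The sketch below uses a one-group, purely absorbing slab (my simplification, not the authors' transport-constrained extensions): both scores are unbiased for the same flux integral, (1 − exp(−σT))/σ per source particle, though their variances differ.

```python
import math
import random
import statistics

random.seed(42)

# Purely absorbing slab of thickness T with total cross section sigma;
# particles enter normally and either collide inside or leak through.
sigma, T, n = 1.0, 2.0, 200_000
track_scores, coll_scores = [], []
for _ in range(n):
    s = -math.log(random.random()) / sigma   # sampled free path
    if s < T:
        track_scores.append(s)               # track length inside the slab
        coll_scores.append(1.0 / sigma)      # collision estimator score
    else:
        track_scores.append(T)               # crosses the slab uncollided
        coll_scores.append(0.0)              # no collision, no score

exact = (1.0 - math.exp(-sigma * T)) / sigma
track_mean = statistics.fmean(track_scores)
coll_mean = statistics.fmean(coll_scores)
print(round(exact, 3), round(track_mean, 3), round(coll_mean, 3))
```

Which estimator has the lower variance depends on the problem (here, the optical thickness of the region), which is what motivates developing improved estimators.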

  15. Cost analysis and estimating tools and techniques

    CERN Document Server

    Nussbaum, Daniel

    1990-01-01

    Changes in production processes reflect the technological advances permeating our products and services. U.S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...

  16. Population estimation techniques for routing analysis

    International Nuclear Information System (INIS)

    Sathisan, S.K.; Chagari, A.K.

    1994-01-01

    A number of on-site and off-site factors affect the potential siting of a radioactive materials repository at Yucca Mountain, Nevada. Transportation-related issues such as route selection and design are among them. These involve evaluation of potential risks and impacts, including those related to population. Population characteristics (total population and density) are critical factors in the risk assessment, emergency preparedness and response planning, and ultimately in route designation. This paper presents an application of Geographic Information System (GIS) technology to facilitate such analyses. Specifically, techniques to estimate critical population information are presented. A case study using the highway network in Nevada is used to illustrate the analyses. TIGER coverages are used as the basis for population information at a block level. The data are then synthesized at tract, county and state levels of aggregation. Of particular interest are population estimates for various corridor widths along transport corridors -- ranging from 0.5 miles to 20 miles in this paper. A sensitivity analysis based on the level of data aggregation is also presented. The results of these analyses indicate that specific characteristics of the area and its population could be used as indicators to aggregate data appropriately for the analysis.

  17. Node Detection and Internode Length Estimation of Tomato Seedlings Based on Image Analysis and Machine Learning

    Directory of Open Access Journals (Sweden)

    Kyosuke Yamamoto

    2016-07-01

    Seedling vigor in tomatoes determines the quality and growth of fruits and total plant productivity. It is well known that the salient effects of environmental stresses appear on the internode length, i.e. the length between adjoining main stem nodes (henceforth called nodes). In this study, we develop a method for internode length estimation using image processing technology. The proposed method consists of three steps: node detection, node order estimation, and internode length estimation. This method has two main advantages: (i) as it uses machine learning approaches for node detection, it does not require adjustment of threshold values even though seedlings are imaged under varying timings and lighting conditions with complex backgrounds; and (ii) as it uses affinity propagation for node order estimation, it can be applied to seedlings with different numbers of nodes without prior provision of the node number as a parameter. Our node detection results show that the proposed method can detect 72% of the 358 nodes in time-series imaging of three seedlings (recall = 0.72, precision = 0.78). In particular, the application of a general object recognition approach, Bag of Visual Words (BoVWs), enabled the elimination of many false positives on leaves occurring in the image segmentation based on pixel color, significantly improving the precision. The internode length estimation results had a relative error of below 15.4%. These results demonstrate that our method has the ability to evaluate the vigor of tomato seedlings quickly and accurately.
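The reported detection metrics follow from the usual confusion-matrix counts. The true-positive and false-positive numbers below are back-calculated (approximately) from the quoted recall and precision over 358 nodes, so treat them as illustrative:

```python
# 358 true nodes, recall 0.72 -> about 258 detected; precision 0.78 ->
# about 73 spurious detections (both counts inferred, not from the paper).
def precision_recall(tp, fp, fn):
    """Standard detection metrics from confusion-matrix counts."""
    return tp / (tp + fp), tp / (tp + fn)

tp = 258          # correctly detected nodes (approx., 258/358 = 0.72)
fn = 358 - tp     # missed nodes
fp = 73           # false detections (approx., 258/331 = 0.78)
prec, rec = precision_recall(tp, fp, fn)
print(round(prec, 2), round(rec, 2))
```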

  18. Prediction of Monte Carlo errors by a theory generalized to treat track-length estimators

    International Nuclear Information System (INIS)

    Booth, T.E.; Amster, H.J.

    1978-01-01

    Present theories for predicting expected Monte Carlo errors in neutron transport calculations apply to estimates of flux-weighted integrals sampled directly by scoring individual collisions. To treat track-length estimators, the recent theory of Amster and Djomehri is generalized to allow the score distribution functions to depend on the coordinates of two successive collisions. It has long been known that the expected track length in a region of phase space equals the expected flux integrated over that region, but that the expected statistical error of the Monte Carlo estimate of the track length is different from that of the flux integral obtained by sampling the sum of the reciprocals of the cross sections for all collisions in the region. These conclusions are shown to be implied by the generalized theory, which provides explicit equations for the expected values and errors of both types of estimators. Sampling expected contributions to the track-length estimator is also treated. Other general properties of the errors for both estimators are derived from the equations and physically interpreted. The actual values of these errors are then obtained and interpreted for a simple specific example

  19. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    International Nuclear Information System (INIS)

    El Kanawati, W; Létang, J M; Sarrut, D; Freud, N; Dauvergne, D; Pinto, M; Testa, É

    2015-01-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt γ-rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of the prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5. (paper)

  20. On estimation of secret message length in LSB steganography in spatial domain

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav

    2004-06-01

    In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
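A stripped-down, unweighted version of the weighted-stego idea can be sketched as follows: embed at a known rate into a smooth synthetic cover, predict the cover by a neighbour average, and recover the rate from the correlation between the prediction residual and the LSB-flip direction. The cover image, the simple predictor, and the absence of weights are all my simplifications of the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(7)

# Smooth synthetic 8-bit "cover" (no saturation), standing in for a photo.
h = w = 256
xx, yy = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
cover = np.rint(128 + 30 * np.sin(xx / 17) + 30 * np.cos(yy / 23)).astype(np.int64)

alpha = 0.4                                      # true relative message length
mask = rng.random(cover.shape) < alpha           # random pixel positions
bits = rng.integers(0, 2, cover.shape)
stego = cover.copy()
stego[mask] = (stego[mask] & ~1) | bits[mask]    # LSB replacement

s = stego.astype(float)
flipped = (stego ^ 1).astype(float)              # every pixel with LSB flipped
pred = (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]) / 4.0

# E[(s - pred)(s - flipped)] is the fraction of flipped pixels, alpha/2,
# because only actually-flipped pixels contribute a systematic +1.
core, fl = s[1:-1, 1:-1], flipped[1:-1, 1:-1]
alpha_hat = 2.0 * float(np.mean((core - pred) * (core - fl)))
print(round(alpha_hat, 2))
```

The paper's estimator adds per-pixel weights and a better cover predictor, which is what stabilizes the estimate across a wide spectrum of natural images.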

  1. Estimation of age structure of fish populations from length-frequency data

    International Nuclear Information System (INIS)

    Kumar, K.D.; Adams, S.M.

    1977-01-01

    A probability model is presented to determine the age structure of a fish population from length-frequency data. It is shown that when the age-length key is available, maximum-likelihood estimates of the age structure can be obtained. When the key is not available, approximate estimates of the age structure can be obtained. The model is used for determination of the age structure of populations of channel catfish and white crappie. Practical applications of the model to impact assessment are discussed
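When the age-length key is available, the maximum-likelihood age composition can be found with a simple EM iteration. The key, the age proportions, and the sample size below are hypothetical:

```python
import numpy as np

# Hypothetical age-length key: rows are ages, columns are P(length bin | age).
key = np.array([[0.7, 0.2, 0.1, 0.0],
                [0.1, 0.5, 0.3, 0.1],
                [0.0, 0.1, 0.4, 0.5]])
true_pi = np.array([0.5, 0.3, 0.2])       # true age proportions

# Expected length-frequency sample of size 10,000 under these proportions.
n = 10_000 * true_pi @ key

# EM iterations for the age proportions (maximum-likelihood estimate).
pi = np.full(3, 1.0 / 3.0)
for _ in range(500):
    resp = pi[:, None] * key                  # proportional to P(age, length)
    resp /= resp.sum(axis=0, keepdims=True)   # P(age | length bin)
    pi = (resp * n).sum(axis=1) / n.sum()     # re-estimate proportions

print(np.round(pi, 2))
```

With noiseless expected counts and an identifiable key, the iteration recovers the true proportions; with real length-frequency samples it returns their MLE.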

  2. Blood capillary length estimation from three-dimensional microscopic data by image analysis and stereology.

    Science.gov (United States)

    Kubínová, Lucie; Mao, Xiao Wen; Janáček, Jiří

    2013-08-01

    Studies of the capillary bed characterized by its length or length density are relevant in many biomedical studies. A reliable assessment of capillary length from two-dimensional (2D), thin histological sections is a rather difficult task as it requires physical cutting of such sections in randomized directions. This is often technically demanding, inefficient, or outright impossible. However, if 3D image data of the microscopic structure under investigation are available, methods of length estimation that do not require randomized physical cutting of sections may be applied. Two different rat brain regions were optically sliced by confocal microscopy and resulting 3D images processed by three types of capillary length estimation methods: (1) stereological methods based on a computer generation of isotropic uniform random virtual test probes in 3D, either in the form of spatial grids of virtual "slicer" planes or spherical probes; (2) automatic method employing a digital version of the Crofton relations using the Euler characteristic of planar sections of the binary image; and (3) interactive "tracer" method for length measurement based on a manual delineation in 3D of the axes of capillary segments. The presented methods were compared in terms of their practical applicability, efficiency, and precision.
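The stereological principle behind the virtual "slicer" planes can be checked by simulation: for isotropically oriented segments, the total length L relates to the number of plane intersections Q by L = 2hQ, where h is the plane spacing. The segment counts and lengths below are arbitrary test values:

```python
import math
import random

random.seed(3)

# Isotropic random "capillary" segments; only their z-extent matters for
# counting intersections with the stack of parallel planes z = k*h.
h = 0.01
segments = []
total_len = 0.0
for _ in range(2000):
    ell = random.uniform(0.02, 0.08)          # segment length
    z0 = random.random()                      # z of one endpoint
    cos_t = random.uniform(-1.0, 1.0)         # isotropic direction in 3D
    segments.append((z0, z0 + ell * cos_t))
    total_len += ell

q = 0
for za, zb in segments:
    lo, hi = min(za, zb), max(za, zb)
    q += math.floor(hi / h) - math.floor(lo / h)   # planes crossed

est = 2 * h * q                               # stereological length estimate
print(round(est, 3), round(total_len, 3))
```

The factor 2 comes from E|cos θ| = 1/2 for isotropic directions in 3D, which is why the methods above must randomize orientation before counting.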

  3. Technique for the focal-length measurement of positive lenses using Fizeau interferometry

    International Nuclear Information System (INIS)

    Pavan Kumar, Yeddanapudi; Chatterjee, Sanjib

    2009-01-01

    We present what we believe is a new technique for the focal-length measurement of positive lenses using Fizeau interferometry. The technique utilizes the Gaussian lens equation. The image distance is measured interferometrically in terms of the radius of curvature of the image-forming wavefront emerging from the lens. The radii of curvature of the image-forming wavefronts corresponding to two different axial object positions of known separation are measured. The focal length of the lens is determined by solving the equations obtained using the Gaussian lens equation for the two object positions. Results obtained for a corrected doublet lens of a nominal focal length of 200.0 mm with a measurement uncertainty of ±2.5% are presented.

  4. Standing Height and its Estimation Utilizing Foot Length Measurements in Adolescents from Western Region in Kosovo

    Directory of Open Access Journals (Sweden)

    Stevo Popović

    2017-10-01

    The purpose of this research is to examine standing height in both genders in the Western Region of Kosovo, as well as its association with foot length, as an alternative for estimating standing height. A total of 664 individuals (338 male and 326 female) participated in this research. The anthropometric measurements were taken according to the protocol of ISAK. The relationships between body height and foot length were determined using simple correlation coefficients at a ninety-five percent confidence interval. A comparison of means of standing height and foot length between genders was performed using a t-test. After that, a linear regression analysis was carried out to examine the extent to which foot length can reliably predict standing height. Results displayed that Western Kosovan males are 179.71±6.00 cm tall and have a foot length of 26.73±1.20 cm, while Western Kosovan females are 166.26±5.23 cm tall and have a foot length of 23.66±1.06 cm. The results have shown that both genders make Western Kosovans a tall group, a little taller than the general Kosovan population. Moreover, foot length reliably predicts standing height in both genders, but not as reliably as arm span. This study also confirms the necessity of developing separate height models for each region in Kosovo, as the results from Western Kosovans do not correspond to the general values.

  5. Estimation and calibration of the water isotope differential diffusion length in ice core records

    NARCIS (Netherlands)

    van der Wel, G.; Fischer, H.; Oerter, H.; Meyer, H.; Meijer, H. A. J.

    2015-01-01

    Palaeoclimatic information can be retrieved from the diffusion of the stable water isotope signal during firnification of snow. The diffusion length, a measure for the amount of diffusion a layer has experienced, depends on the firn temperature and the accumulation rate. We show that the estimation

  6. The Grid Method in Estimating the Path Length of a Moving Animal

    NARCIS (Netherlands)

    Reddingius, J.; Schilstra, A.J.; Thomas, G.

    1983-01-01

    (1) The length of a path covered by a moving animal may be estimated by counting the number of times the animal crosses any line of a grid and applying a conversion factor. (2) Some factors are based on the expected distance through a randomly crossed square; another on the expected crossings of a
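For a square grid, the classical conversion factor is L ≈ (π/4)·d·N, where d is the grid spacing and N the number of line crossings; the random-step "animal path" below checks this by simulation (step lengths and headings are arbitrary choices):

```python
import math
import random

random.seed(5)

# Random-walk path of straight steps with uniform headings; count crossings
# of the square grid formed by integer multiples of d in both directions.
d = 1.0
x = y = 0.0
crossings = 0
path_len = 0.0
for _ in range(5000):
    step = random.uniform(0.2, 1.5)
    theta = random.uniform(0.0, 2.0 * math.pi)
    nx, ny = x + step * math.cos(theta), y + step * math.sin(theta)
    crossings += abs(math.floor(nx / d) - math.floor(x / d))   # vertical lines
    crossings += abs(math.floor(ny / d) - math.floor(y / d))   # horizontal lines
    path_len += step
    x, y = nx, ny

est = math.pi * d * crossings / 4.0     # grid-method length estimate
print(round(est, 1), round(path_len, 1))
```

The π/4 factor follows from E|cos θ| = 2/π for uniform headings, applied to both families of grid lines, which is the "expected crossings" argument the abstract refers to.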

  7. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    Science.gov (United States)

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  8. [Measurement of screw length through drilling technique in osteosynthesis of the proximal humerus fractures].

    Science.gov (United States)

    Avcı, Cem Coşkun; Gülabi, Deniz; Sağlam, Necdet; Kurtulmuş, Tuhan; Saka, Gürsel

    2013-01-01

    This study aims to investigate the efficacy of screw length measurement through a drilling technique on the reduction of intraarticular screw penetration and fluoroscopy time in osteosynthesis of proximal humerus fractures. Between January 2008 and June 2012, 98 patients (34 males, 64 females; mean age 64.4 years; range 35 to 81 years) who underwent osteosynthesis using locking anatomical proximal humerus plates (PHILOS) in our clinic with the diagnosis of Neer type 2, 3 or 4 were included. Two different surgical techniques were used to measure proximal screw length in the plate, and patients were divided into two groups based on the technique used. In group 1, screw length was determined by a 3 mm blunt-tipped Kirschner wire without fluoroscopic control. In group 2, at least bilateral fluoroscopic images were obtained for each screw. Intraarticular screw penetration was detected in five patients (10.6%) in group 1, and in 19 patients (37.3%) in group 2. The mean fluoroscopic imaging time was 10.6 seconds in group 1 and 24.8 seconds in group 2, indicating a statistically significant difference. Screw length measurement through the drilling technique significantly reduces the intraarticular screw penetration and fluoroscopy time in osteosynthesis of proximal humerus fractures using PHILOS plates.

  9. Estimation of tool wear length in finish milling using a fuzzy inference algorithm

    Science.gov (United States)

    Ko, Tae Jo; Cho, Dong Woo

    1993-10-01

    The geometric accuracy and surface roughness are mainly affected by the flank wear at the minor cutting edge in finish machining. A fuzzy estimator obtained by a fuzzy inference algorithm with a max-min composition rule to evaluate the minor flank wear length in finish milling is introduced. The features sensitive to minor flank wear are extracted from the dispersion analysis of a time series AR model of the feed directional acceleration of the spindle housing. Linguistic rules for fuzzy estimation are constructed using these features, and then fuzzy inferences are carried out with test data sets under various cutting conditions. The proposed system turns out to be effective for estimating minor flank wear length, and its mean error is less than 12%.
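A minimal max-min (Mamdani-style) inference loop is sketched below. The membership functions and rules are invented for illustration and are not the authors' rule base; the input stands for a dispersion-analysis feature scaled to [0, 1], and the output for wear length in mm:

```python
def tri(x, a, b, c):
    """Triangular membership with shoulders when a == b or b == c."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(left, right, 1.0))

IN_SETS = [(0.0, 0.0, 0.5), (0.2, 0.5, 0.8), (0.5, 1.0, 1.0)]   # feature: low/med/high
OUT_SETS = [(0.0, 0.0, 0.3), (0.1, 0.3, 0.5), (0.3, 0.6, 0.6)]  # wear (mm): low/med/high

def estimate_wear(feature):
    fire = [tri(feature, *s) for s in IN_SETS]      # rule firing strengths
    num = den = 0.0
    for i in range(121):                            # wear axis 0..0.6 mm
        wv = i * 0.005
        # max-min composition: clip each consequent, aggregate by max
        mu = max(min(f, tri(wv, *s)) for f, s in zip(fire, OUT_SETS))
        num += mu * wv
        den += mu
    return num / den if den else 0.0                # centroid defuzzification

low, high = estimate_wear(0.2), estimate_wear(0.8)
print(round(low, 3), round(high, 3))
```

A small feature value fires the "low wear" rule and yields a small wear estimate; a large value fires "high wear", giving the monotone mapping a wear estimator needs.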

  10. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
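The underlying proportionality can be sketched directly: if a branch's flow scales with its summed vessel length, its outlet resistance scales inversely with that length. The vessel lengths, total flow, and pressure drop below are made-up illustrative values, not patient data:

```python
# Hypothetical summed vessel lengths (mm) of the three major coronaries.
lengths = {"LAD": 148.0, "LCx": 112.0, "RCA": 140.0}
total_flow = 4.0   # assumed total coronary flow, ml/s
dp = 60.0          # assumed perfusion pressure drop, mmHg

# Flow division proportional to vessel length, resistance from R = dP/Q.
total_len = sum(lengths.values())
flows = {k: total_flow * v / total_len for k, v in lengths.items()}
resistances = {k: dp / q for k, q in flows.items()}  # mmHg·s/ml

for k in lengths:
    print(k, round(flows[k], 2), round(resistances[k], 1))
```

The shortest branch (here LCx) gets the least flow and therefore the highest outlet resistance, mirroring the muscle-volume-based reasoning the paper compares against.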

  11. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...

  12. Estimation of roughness lengths and flow separation over compound bedforms in a natural-tidal inlet

    DEFF Research Database (Denmark)

    Lefebvre, Alice; Ernstsen, Verner Brandbyge; Winter, Christian

    2013-01-01

    The hydraulic effect of asymmetric compound bedforms on tidal currents was assessed from field measurements of flow velocity in the Knudedyb tidal inlet, Denmark. Large asymmetric bedforms with smaller superimposed ones are a common feature of sandy shallow water environments and are known to act as hydraulic roughness elements in dependence with flow direction. The presence of a flow separation zone on the bedform lee was estimated through analysis of the measured velocity directions and the calculation of the flow separation line. The Law of the Wall was used to calculate roughness lengths and shear ... was found to underestimate the length of the flow separation zone of the primary bedforms. A better estimation of the presence and shape of the flow separation zone over complex bedforms in a tidal environment still needs to be determined; in particular the relationship between flow separation zone...
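The Law-of-the-Wall step mentioned above reduces to a straight-line fit of velocity against ln(z). The profile below is synthetic, generated from known u* and z0 rather than the Knudedyb measurements:

```python
import math

# Law of the Wall: u(z) = (u*/kappa) * ln(z / z0). A linear regression of
# u on ln(z) gives slope = u*/kappa and intercept = -(u*/kappa)*ln(z0).
kappa = 0.4
u_star_true, z0_true = 0.05, 0.002                  # assumed "true" values
z = [0.1, 0.2, 0.5, 1.0, 2.0, 4.0]                  # heights above bed, m
u = [u_star_true / kappa * math.log(zz / z0_true) for zz in z]

lx = [math.log(zz) for zz in z]
n = len(z)
mx, my = sum(lx) / n, sum(u) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, u))
         / sum((a - mx) ** 2 for a in lx))
intercept = my - slope * mx

u_star = kappa * slope                              # shear velocity, m/s
z0 = math.exp(-intercept / slope)                   # roughness length, m
print(round(u_star, 3), round(z0, 4))
```

With measured profiles the residuals of this fit, and the height range used, determine how trustworthy the recovered roughness length is.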

  13. The efficacy of a modified general hip technique in the treatment of leg length discrepancies

    OpenAIRE

    2015-01-01

    M.Dip.Tech. Functional leg length discrepancies, as distinct from anatomical discrepancies, are often associated with sacroiliac joint dysfunction. This may result in back pain and discomfort. Chiropractors usually treat this condition using a side posture sacroiliac adjustment, but in some cases, an adjustment may not be indicated. This study aims to determine whether a Modified General Hip technique would be an acceptable alternative treatment. For this study, 30 patients who suffered fr...

  14. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first-order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second-order polynomial surfaces...

  15. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on nanocrystalline cellulose.

  16. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
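Principal components regression, the first of the two biased techniques, can be sketched on deliberately collinear data (synthetic, standing in for the flight-test measurements): regress on the leading principal components only and map the coefficients back to the original regressors.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two nearly collinear regressors, the situation where ordinary least
# squares becomes unstable and biased estimators pay off.
n = 200
x1 = rng.normal(0.0, 1.0, n)
x2 = x1 + rng.normal(0.0, 0.01, n)        # almost a copy of x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(0.0, 0.1, n)

Xc, yc = X - X.mean(0), y - y.mean()
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                      # keep components with large singular values
scores = Xc @ vt[:k].T                     # principal component scores
gamma = np.linalg.lstsq(scores, yc, rcond=None)[0]
beta_pcr = vt[:k].T @ gamma                # coefficients in original coordinates

beta_ols = np.linalg.lstsq(Xc, yc, rcond=None)[0]
print(np.round(beta_pcr, 2), np.round(beta_ols, 2))
```

Dropping the near-zero singular direction is what introduces the (small) bias while drastically reducing the variance of the coefficient estimates, the trade-off the abstract describes.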

  17. Estimation of Sex From Index and Ring Finger Lengths in An Indigenous Population of Eastern India

    Science.gov (United States)

    Sen, Jaydip; Ghosh, Ahana; Mondal, Nitish; Krishan, Kewal

    2015-01-01

    Introduction Forensic anthropology involves the identification of human remains for medico-legal purposes. Estimation of sex is an essential element of medico-legal investigations when identification of unknown dismembered remains is involved. Aim The present study was conducted with an aim to estimate sex from index and ring finger lengths of adult individuals belonging to an indigenous population of eastern India. Materials and Methods A total of 500 unrelated adult individuals (18-60 years) from the Rajbanshi population (males: 250, females: 250) took part in the study. A total of 400 participants (males: 200, females: 200) were randomly used to develop sex estimation models using Binary Logistic Regression Analysis (BLR). A separate group of 200 adults (18-60 years) from the Karbi tribal population (males: 100, females: 100) were included to validate the results obtained on the Rajbanshi population. The univariate and bivariate models derived on the study group (n=400) were tested on a hold-out sample of Rajbanshi participants (n=100) and on the other test population of Karbi participants (n=200). Results The results indicate that Index Finger Length (IFL) and Ring Finger Length (RFL) of both hands were significantly longer in males as compared to females. The ring finger was longer than the index finger in both sexes. The study successfully highlights the existence of sex differences in IFL and RFL (p<0.05). No sex differences were, however, observed for the index and ring finger ratio. The predictive accuracy of IFL and RFL in sex estimation ranged from 70-75% (in the hold-out sample from the Rajbanshi population) and 60-66% (in the test sample from the Karbi population). A Receiver Operating Curve (ROC) analysis was performed to test the predictive accuracy after predicting the probability of IFL and RFL in sex estimation. The predicted probabilities using ROC analysis were observed to be higher on the left side and in multivariate analysis. Conclusion The

  18. Skeletal height estimation from regression analysis of sternal lengths in a Northwest Indian population of Chandigarh region: a postmortem study.

    Science.gov (United States)

    Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H

    2011-03-20

    Skeletal height estimation from regression analysis of eight sternal lengths in subjects from the Chandigarh zone of Northwest India is the topic of this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and the first four intercostal lengths of the mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were greater in North Indians and males than in South Indians and females. Except for the intercostal lengths, all the sternal lengths were positively correlated with the stature of the deceased in both sexes (p < 0.05). Curvilinear regression analysis of sternal lengths was found more useful than linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes, and the length of manubrium along with the 2nd and 3rd intercostal lengths of the mesosternum in males, were selected as the best estimators of stature. Nonetheless, the stature of males can be predicted with an SEE of 6.66 (R(2) = 0.16, r = 0.318) from the combination MBL+BL_3+LM+BL_2, and in females, from MBL only, it can be estimated with an SEE of 6.65 (R(2) = 0.10, r = 0.318), whereas from the multiple regression analysis of pooled data, stature can be estimated with an SEE of 6.97 (R(2) = 0.387, r = 0.575) from the combination MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were found to be statistically significant for almost all the variables in both sexes, except the 4th intercostal length in males and the 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation. The universal regression analysis used by Kanchan et al. [39], when applied to sternal lengths, gave satisfactory estimates of stature for males only, but female stature was comparatively better estimated from simple linear regressions.
But they are not proposed for the

  19. Tracer techniques in estimating nuclear materials holdup

    International Nuclear Information System (INIS)

    Pillay, K.K.S.

    1987-01-01

    Residual inventory of nuclear materials remaining in processing facilities (holdup) is recognized as an insidious problem for the safety of plant operations and the safeguarding of special nuclear materials (SNM). This paper reports on an experimental study in which a well-known method of radioanalytical chemistry, the tracer technique, was successfully used to improve nondestructive measurements of the holdup of nuclear materials in a variety of plant equipment. Such controlled measurements can improve the sensitivity of measurements of residual inventories of nuclear materials in process equipment by several orders of magnitude, and the good-quality data obtained lend themselves to developing mathematical models of SNM holdup during stable plant operations.

  20. Modeling relaxation length and density of acacia mangium wood using gamma - ray attenuation technique

    International Nuclear Information System (INIS)

    Tamer A Tabet; Fauziah Abdul Aziz

    2009-01-01

    Wood density measurement is related to several factors that influence wood quality. In this paper, the density, relaxation length and half-thickness value of Acacia mangium wood of eight ages (3, 5, 7, 10, 11, 13 and 15 years) were determined using gamma radiation from a 137Cs source. Results show that the Acacia mangium tree of age 3 years has the highest relaxation length, 83.33 cm, and the lowest density, 0.43 g cm-3, while the tree of age 15 years has the lowest relaxation length, 28.56 cm, and the highest density, 0.76 g cm-3. Results also show that the 3-year-old Acacia mangium wood has the highest half-thickness value, 57.75 cm, and the 15-year-old tree has the lowest half-thickness value, 19.85 cm. Two mathematical models have been developed for the prediction of the variation of density with relaxation length and half-thickness value for trees of different ages. A good agreement (greater than 85% in most cases) was observed between the measured values and the predicted ones. A very good linear correlation was found between measured density and the age of the tree (R2 = 0.824), and between estimated density and Acacia mangium tree age (R2 = 0.952). (Author)
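The relaxation length λ and the half-thickness reported above are linked by the exponential attenuation law I(x) = I0·exp(−x/λ), for which the half-thickness is x½ = λ·ln 2. A minimal sketch of that relation (the numerical inputs are the values quoted in the abstract; the linear attenuation coefficient μ = 1/λ is implied rather than stated):

```python
import math

def half_thickness(relaxation_length_cm: float) -> float:
    """Half-thickness x_1/2 = lambda * ln(2) for exponential attenuation
    I(x) = I0 * exp(-x / lambda)."""
    return relaxation_length_cm * math.log(2)

def transmitted_fraction(x_cm: float, relaxation_length_cm: float) -> float:
    """Fraction of gamma intensity surviving a thickness x."""
    return math.exp(-x_cm / relaxation_length_cm)

# Relaxation lengths reported for Acacia mangium in the abstract:
x_half_3yr = half_thickness(83.33)   # ~57.76 cm (abstract reports 57.75 cm)
x_half_15yr = half_thickness(28.56)  # ~19.80 cm (abstract reports 19.85 cm)
```

Applying the rule to the quoted relaxation lengths reproduces the reported half-thickness values to within rounding, which is a useful internal consistency check on the abstract's numbers.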

  1. Fractal-Based Lightning Channel Length Estimation from Convex-Hull Flash Areas for DC3 Lightning Mapping Array Data

    Science.gov (United States)

    Bruning, Eric C.; Thomas, Ronald J.; Krehbiel, Paul R.; Rison, William; Carey, Larry D.; Koshak, William; Peterson, Harold; MacGorman, Donald R.

    2013-01-01

    We will use VHF Lightning Mapping Array data to estimate NOx per flash and per unit channel length, including the vertical distribution of channel length. What's the best way to find channel length from VHF sources? This paper presents the rationale for the fractal method, which is closely related to the box-covering method.

  2. Sexual Dimorphism and Estimation of Height from Body Length Anthropometric Parameters among the Hausa Ethnic Group of Nigeria

    Directory of Open Access Journals (Sweden)

    Jaafar Aliyu

    2018-01-01

    This study was carried out to investigate sexual dimorphism in body length and other anthropometric parameters, and to generate formulae for height estimation using anthropometric measurements of selected length parameters among the Hausa ethnic group of Kaduna State, Nigeria. A cross-sectional study was conducted with a total of 500 participants, mainly secondary school students between the ages of 16-27 years; anthropometric measurements were obtained using standard protocols. Significant sexual dimorphism was observed in all the parameters except body mass index. In all the parameters, males had significantly (P < 0.05) higher mean values except biaxillary distance. Height showed positive and strongest correlations with demispan length, followed by knee height, thigh length, sitting height, hand length, foot length, humeral length, forearm length and weight, respectively. There were weak and positive correlations between height and neck length as well as biaxillary length. Demispan length showed the strongest correlation coefficient and a low standard error of estimate, indicating stronger estimation ability than the other parameters. Combinations of two parameters gave better estimations and lower standard errors of estimate, and combining three parameters gave better estimations with still lower standard errors of estimate; better correlation coefficients were likewise observed with the double and triple parameter combinations. Male Hausa tend to have larger body proportions than females. Body length anthropometry proved to be useful in the estimation of stature among the Hausa ethnic group of Kaduna State, Nigeria.
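Single-predictor stature models of the kind described above are ordinary least-squares fits of the form height = a + b·(parameter). A minimal pure-Python sketch; the demispan/height pairs below are hypothetical illustration values, not the study's measurements:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Hypothetical (demispan cm, height cm) pairs, for illustration only.
demispan = [75.0, 78.0, 80.0, 82.0, 85.0]
height = [160.0, 166.0, 170.0, 174.0, 180.0]
a, b = ols_fit(demispan, height)
estimate = a + b * 79.0    # predicted stature for a 79 cm demispan
```

Combining two or three predictors, as the study does, extends this to multiple regression; the standard error of estimate then quantifies how much the residual scatter shrinks as predictors are added.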

  3. Estimation of diaphragm length in patients with severe chronic obstructive pulmonary disease.

    Science.gov (United States)

    McKenzie, D K; Gorman, R B; Tolman, J; Pride, N B; Gandevia, S C

    2000-11-01

    In patients with advanced chronic obstructive pulmonary disease (COPD) diaphragm function may be compromised because of reduced muscle fibre length. Diaphragm length (L(Di)) can be estimated from measurements of transverse diameter of the rib cage (D(Rc)) and the length of the zone of apposition (L(Zapp)) in healthy subjects, but this method has not been validated in patients with COPD. Postero-anterior chest radiographs were obtained at total lung capacity (TLC), functional residual capacity (FRC) and residual volume (RV) in nine male patients with severe COPD (mean [S.D.]; FEV(1), 23 [6] %pred.; FRC, 199 [15] %pred.). Radiographs taken at TLC were used to identify the lateral costal insertions of the diaphragm (L(Zapp) assumed to approach zero at TLC). L(Di) was measured directly and also estimated from measurements of L(Zapp) and D(Rc) using a prediction equation derived from healthy subjects. The estimation of L(Di) was highly accurate with an intraclass correlation coefficient of 0.93 and 95% CI of approximately +/-8% of the true value. L(Di) decreased from 426 (64) mm at RV to 305 (31) mm at TLC. As there were only small and variable changes in D(Rc) across the lung volume range, most of the L(Di) changes occurred in the zone of apposition. Additional studies showed that measurements of L(Di) from PA and lateral radiographs performed at different lung volumes were tightly correlated. These results suggest that non-invasive measurements of L(Zapp) in the coronal plane (e.g. using ultrasonography) and D(Rc) (e.g. using magnetometers) can be used to provide an accurate estimate of L(Di) in COPD patients.

  4. Parameter estimation techniques for LTP system identification

    Science.gov (United States)

    Nofrarias Serra, Miquel

    LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10-14 m s-2/√Hz in the millihertz band. The mission is not only challenging in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods with a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model and we report on the latest results of these mock data exercises.

  5. Dosimetry techniques applied to thermoluminescent age estimation

    International Nuclear Information System (INIS)

    Erramli, H.

    1986-12-01

    The reliability and ease of field application of measuring techniques for natural radioactivity dosimetry are studied. The natural radioactivity dose in minerals is composed of the internal dose deposited by alpha and beta radiation emitted from the sample itself, and the external dose deposited by gamma and cosmic radiation from the surroundings of the sample. Two techniques for external dosimetry are examined in detail: TL dosimetry and field gamma dosimetry. Calibration and experimental conditions are presented. A new integrated dosimetric method for internal and external dose measurement is proposed: the TL dosimeter is placed in the soil under exactly the same conditions as the sample, for a time long enough for the total dose evaluation.

  6. Radon emanometric technique for 226Ra estimation

    International Nuclear Information System (INIS)

    Mandakini Maharana; Sengupta, D.; Eappen, K.P.

    2010-01-01

    Studies on natural background radiation show that the major contribution to the radiation dose received by the population comes through the inhalation pathway, i.e., from radon (222Rn) gas. As the immediate parent of radon is radium (226Ra), it is imperative that the radium content be measured in the various matrices present in the environment. Among the various methods available for the measurement of radium, gamma spectrometry and the radiochemical method are the two most extensively used. In comparison with these two methods, the radon emanometric technique described here is a simple and convenient method. The paper gives details of sample processing, the radon bubbler, the Lucas cell and the methodology used in the emanometric method. A comparison of the emanometric method with gamma spectrometry has also been undertaken, and the results for a few soil samples are given. The results show a fairly good agreement between the two methods. (author)

  7. COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES

    International Nuclear Information System (INIS)

    Muske, K.; Howse, J.

    2000-01-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
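As a sketch of the recursive flavour of these estimators, here is a generic recursive least squares (RLS) update for a two-parameter linear model. This is an illustration of the general technique only; the paper's source-tracking model is nonlinear and considerably more involved:

```python
def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update for a linear model y = phi . theta.
    theta: current estimate [t1, t2]; P: 2x2 covariance matrix;
    phi: regressor [x, 1]; lam: forgetting factor (1.0 = no forgetting)."""
    # P @ phi
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]        # gain vector
    err = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # P <- (P - K phi^T P) / lam   (P stays symmetric)
    P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
         [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta, P
```

Feeding noiseless samples of y = 2x + 1 with a large initial P drives the estimate toward [2, 1]; in the tracking application the regressor would instead encode the detector geometry at each measurement.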

  8. Labelled antibody techniques in glycoprotein estimation

    International Nuclear Information System (INIS)

    Hazra, D.K.; Ekins, R.P.; Edwards, R.; Williams, E.S.

    1977-01-01

    The problems in the radioimmunoassay of the glycoprotein hormones (pituitary LH, FSH and TSH, and human chorionic gonadotrophin, HCG) are reviewed, viz: limited specificity and sensitivity in the clinical context, interpretation of disparity between bioassay and radioimmunoassay, and interlaboratory variability. The advantages and limitations of the labelled antibody techniques - classical immunoradiometric methods and two-site or 125I-anti-IgG indirect labelling modifications - are reviewed in general, and their theoretical potential in glycoprotein assays examined in the light of previous work. Preliminary experiments in the development of a coated-tube two-site assay for glycoproteins using 125I-anti-IgG labelling are described, including conditions for maximizing solid-phase extraction of the antigen, iodination of anti-IgG, and assay conditions such as the effects of temperature of incubation with antigen, 'hormone-free serum', heterologous serum and detergent washing. Experiments with extraction and antigen-specific antisera raised in the same or different species are described, as exemplified by LH and TSH assay systems, the latter apparently promising greater sensitivity than radioimmunoassay. Proposed experimental and mathematical optimisation and validation of the method as an assay system is outlined, and the areas for further work delineated. (orig.)

  9. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions; therefore, the technique is very fast.


  12. Evaluation of pipeline defect's characteristic axial length via model-based parameter estimation in ultrasonic guided wave-based inspection

    International Nuclear Information System (INIS)

    Wang, Xiaojuan; Tse, Peter W; Dordjevich, Alexandar

    2011-01-01

    The reflection signal from a defect in the process of guided wave-based pipeline inspection usually includes sufficient information to detect and define the defect. In previous research, it has been found that the reflection of guided waves from even a complex defect primarily results from the interference between reflection components generated at the front and the back edges of the defect. The respective contributions of different parameters of a defect to the overall reflection can be affected by the features of the two primary reflection components. The identification of these components embedded in the reflection signal is therefore useful in characterizing the defect concerned. In this research, we propose a method of model-based parameter estimation with the aid of the Hilbert-Huang transform technique for the purpose of decomposing a reflection signal to enable characterization of the pipeline defect. Once the two primary edge reflection components are decomposed and identified, the distance between the reflection positions, which closely relates to the axial length of the defect, can be easily and accurately determined. Considering the irregular profiles of complex pipeline defects at their two edges, which is often the case in real situations, the average of the varied axial lengths of such a defect along the circumference of the pipeline is used in this paper as the characteristic value of the actual axial length for comparison purposes. Experimental results from artificial defects and real corrosion in sample pipes are presented to demonstrate the effectiveness of the proposed method.

  13. Estimation of MONIN-OBUKHOV length using richardson and bulk richardson number

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2000-01-01

    The 1996 NOVA atmospheric boundary layer data from North Carolina are used as 30-minute averages for five days. Because of missing data for friction velocity (u*) and sensible heat flux (H), it was necessary to calculate u* and H using the equations for logarithmic wind speed and net radiation (Briggs [7]), which are considered in this work. It is found that the correlation between the predicted and observed values of u* and H is 0.88 and 0.86, respectively. A comparison is made of the Monin-Obukhov length scale (L) estimated using the gradient Richardson number (Ri) and the bulk Richardson number (Rib) with the L-value computed from the formula for L. It is found that the agreement between the predicted and observed values of L is better when L is estimated from the bulk Richardson number (Rib) rather than from the gradient Richardson number (Ri).
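The conversion from a Richardson number to L is typically done through surface-layer similarity relations. A minimal sketch using the common Businger-Dyer style approximations for the gradient Richardson number; these particular formulas are an assumption here, since the abstract does not give the paper's exact expressions:

```python
def obukhov_length(z: float, ri: float) -> float:
    """Monin-Obukhov length L from the gradient Richardson number Ri at
    height z, using common Businger-Dyer approximations for z/L:
      z/L = Ri                for unstable conditions (Ri < 0)
      z/L = Ri / (1 - 5 Ri)   for stable conditions (0 <= Ri < 0.2)
    """
    if ri < 0:
        zeta = ri
    elif ri < 0.2:
        zeta = ri / (1.0 - 5.0 * ri)
    else:
        raise ValueError("similarity relations break down for Ri >= 0.2")
    if zeta == 0:
        return float("inf")   # neutral stratification: L -> infinity
    return z / zeta
```

For example, obukhov_length(10.0, 0.1) gives L = 50 m (stable), while a negative Ri yields a negative L, the usual convention for unstable conditions.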

  14. Estimating the Sensitivity of CLM-Crop to Plant Date and Growing Season Length

    Science.gov (United States)

    Drewniak, B. A.; Kotamarthi, V. R.

    2012-12-01

    The Community Land Model (CLM), the land component of the Community Earth System Model (CESM), is designed to estimate the land surface response to climate through simulated vegetation phenology and soil carbon and nitrogen dynamics. Since human influences play a significant role shaping the land surface, the vegetation has been expanded to include agriculture (CLM-Crop) for three crop types: corn, soybean, and spring wheat. CLM-Crop parameters, which define crop phenology, are optimized against AmeriFlux observations of gross primary productivity, net ecosystem exchange, and stored biomass and carbon, for two sites in the U.S. growing corn and soybean. However, there is uncertainty in the measurements and using a small subset of data to determine model parameters makes validation difficult. In order to account for the differences in plant behavior across climate zones, an input dataset is used to define the planting dates and the length of the growing season. In order to improve model performance, and to understand the impacts of uncertainty from the input data, we evaluate the sensitivity of crop productivity and production against planting date and the length of the growing season. First, CLM-Crop is modified to establish plant date based on temperature trends for the previous 10-day period, constrained against the range of observed planting dates. This new climate-based model is compared with the standard fixed plant dates to determine how sensitive the model is to when seeding occurs, and how comparable the climate calculated plant dates are to the fixed dates. Next, the length of the growing season will be revised to account for an alternative climate. Finally, both the climate-based planting and new growth season will be simulated together. Results of the different model runs will be compared to the standard model and to observations to determine the importance of planting date and growing season length on crop productivity and yield.

  15. Estimates of bottom roughness length and bottom shear stress in South San Francisco Bay, California

    Science.gov (United States)

    Cheng, R.T.; Ling, C.-H.; Gartner, J.W.; Wang, P.-F.

    1999-01-01

    A field investigation of the hydrodynamics and the resuspension and transport of particulate matter in a bottom boundary layer was carried out in South San Francisco Bay (South Bay), California, during March-April 1995. Using broadband acoustic Doppler current profilers, detailed measurements of the turbulent mean velocity distribution within 1.5 m above the bed have been obtained. A global method of data analysis was used for estimating the bottom roughness length zo and the bottom shear stress (or friction velocity u*). Field data have been examined by dividing the time series of velocity profiles into 24-hour periods and independently analyzing the velocity profile time series by flooding and ebbing periods. The global method of solution gives consistent properties of bottom roughness length zo and bottom shear stress values (or friction velocities u*) in South Bay. Estimated mean values of zo and u* for flooding and ebbing cycles are different. The differences in mean zo and u* are shown to be caused by tidal current flood-ebb inequality, rather than by the flooding or ebbing of tidal currents. The bed shear stress correlates well with a reference velocity; the slope of the correlation defines a drag coefficient. Forty-three days of field data in South Bay show two regimes of zo (and drag coefficient) as a function of a reference velocity. When the mean velocity is >25-30 cm s-1, ln zo (and thus the drag coefficient) is inversely proportional to the reference velocity. The cause of the reduction of roughness length is hypothesized to be sediment erosion due to intensifying tidal currents, thereby reducing bed roughness. When the mean velocity is <25-30 cm s-1, the correlation between zo and the reference velocity is less clear. A plausible explanation for the scattered values of zo under this condition may be sediment deposition. The measured sediment data were inadequate to support this hypothesis, but the proposed hypothesis warrants further field investigation.
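The core of estimating zo and u* from a velocity profile is a least-squares fit of the logarithmic law u(z) = (u*/κ)·ln(z/zo). The paper's "global method" fits many profiles jointly; the sketch below shows the standard single-profile fit, with a synthetic profile standing in for measured data:

```python
import math

KAPPA = 0.4  # von Karman constant

def fit_log_law(z, u):
    """Least-squares fit of u(z) = (u*/KAPPA) * ln(z/z0); returns (u_star, z0).
    Regressing u on ln(z) gives slope = u*/KAPPA and
    intercept = -(u*/KAPPA) * ln(z0)."""
    x = [math.log(zi) for zi in z]
    n = len(x)
    mx, mu = sum(x) / n, sum(u) / n
    slope = (sum((xi - mx) * (ui - mu) for xi, ui in zip(x, u))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = mu - slope * mx
    return KAPPA * slope, math.exp(-intercept / slope)

# Synthetic profile generated with u* = 0.05 m/s and z0 = 0.001 m:
heights = [0.2, 0.5, 1.0, 1.5]                                 # m above bed
speeds = [(0.05 / KAPPA) * math.log(z / 0.001) for z in heights]
u_star, z0 = fit_log_law(heights, speeds)   # recovers 0.05 and 0.001
```

With real current-profiler data the residuals of this fit also indicate how well the log-layer assumption holds over the 1.5 m measurement range.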

  16. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
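Brown's double exponential smoothing, used above at the prediction step, maintains two exponentially smoothed series and extrapolates a local linear trend from them. A minimal sketch of the generic algorithm (not the paper's tuned implementation):

```python
def brown_des(series, alpha=0.5, horizon=1):
    """Brown's double exponential smoothing. Returns the forecast
    `horizon` steps beyond the last observation."""
    s1 = s2 = series[0]                 # initialise both smoothers
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1      # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + horizon * trend
```

On a steady trend the forecast tracks the next value once the start-up transient has decayed; for example, smoothing the sequence 0, 1, ..., 50 yields a one-step forecast of approximately 51.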

  17. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 ... In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in .... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high.

  18. Evaluation of mfcc estimation techniques for music similarity

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Christensen, Mads Græsbøll; Murthi, Manohar

    2006-01-01

    Spectral envelope parameters in the form of mel-frequency cepstral coefficients are often used for capturing timbral information of music signals in connection with genre classification applications. In this paper, we evaluate mel-frequency cepstral coefficient (MFCC) estimation techniques, namely ... independent linear prediction and MVDR spectral estimators did not exhibit any statistically significant improvement over MFCCs based on the simpler FFT.

  19. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    Science.gov (United States)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

    An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σφ/2π → 0) and small relative energy spread (σγ/γr → 0), the energy spread (Y = σγ²) at the exit of the traveling wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φr|z=0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be chosen at random, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies, when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
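Since Y = AX² + BX + C has three unknowns, energy-spread measurements at three (or more) arbitrary injection phases determine the coefficients, and A then carries the bunch-length information. A minimal sketch solving the three-point case exactly (the sample numbers are hypothetical; a real measurement would scan many phases and use a least-squares fit):

```python
def fit_parabola(points):
    """Solve Y = A*X**2 + B*X + C exactly from three (X, Y) samples,
    using Cramer's rule on the 3x3 Vandermonde system."""
    (x1, y1), (x2, y2), (x3, y3) = points

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    M = [[x1 * x1, x1, 1.0], [x2 * x2, x2, 1.0], [x3 * x3, x3, 1.0]]
    d = det3(M)
    Ma = [[y1, x1, 1.0], [y2, x2, 1.0], [y3, x3, 1.0]]
    Mb = [[x1 * x1, y1, 1.0], [x2 * x2, y2, 1.0], [x3 * x3, y3, 1.0]]
    Mc = [[x1 * x1, x1, y1], [x2 * x2, x2, y2], [x3 * x3, x3, y3]]
    return det3(Ma) / d, det3(Mb) / d, det3(Mc) / d

# Hypothetical samples lying on Y = 2*X**2 + 3*X + 1, with X = cos(phase):
A, B, C = fit_parabola([(-0.5, 0.0), (0.0, 1.0), (0.5, 3.0)])
```

In the measurement, each X is the cosine of a scanned injection phase and Y the corresponding measured energy spread squared, mirroring how a quadrupole scan fits a parabola to recover emittance.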

  20. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  1. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    The use of the maximum length sequence (m-sequence) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial, which exists at different orders, the selection of polynomial order can be problematic in practice. Usually, the m-sequence is delivered repetitively in a looped fashion. Ensemble averaging is carried out as the first step, followed by cross-correlation analysis to deconvolve the linear/nonlinear responses. Based on the classical noise reduction property of the additive noise model, theoretical equations have been derived in the present study for measuring the noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order-7 and order-9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio in designing m-sequence experiments.
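An order-n m-sequence is produced by a linear feedback shift register whose feedback taps come from a primitive polynomial, giving a period of 2^n − 1 before the pattern repeats. A sketch for order 7, using taps (7, 6), i.e. x⁷ + x⁶ + 1, one standard maximal-length choice (the study's own polynomials are not specified in the abstract):

```python
def m_sequence(order=7, taps=(7, 6), seed=None):
    """Generate one period (2**order - 1 bits) of a maximum length sequence
    from a Fibonacci LFSR. `taps` lists the exponents of the primitive
    feedback polynomial; (7, 6) corresponds to x^7 + x^6 + 1."""
    state = list(seed) if seed else [1] * order   # any nonzero start state
    out = []
    for _ in range(2 ** order - 1):
        out.append(state[-1])                     # output the last stage
        fb = 0
        for t in taps:                            # XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]                 # shift, feed back
    return out
```

One full period is balanced up to a single bit: for order 7 it contains 64 ones and 63 zeros, which underlies the flat correlation properties exploited in the deconvolution.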

  2. Estimation of airway smooth muscle stiffness changes due to length oscillation using artificial neural network.

    Science.gov (United States)

    Al-Jumaily, Ahmed; Chen, Leizhi

    2012-10-07

    This paper presents a novel approach to estimate stiffness changes in airway smooth muscles due to external oscillation. Artificial neural networks are used to model the stiffness changes due to cyclic stretches of the smooth muscles. The nonlinear relationship between stiffness ratios and oscillation frequencies is modeled by a feed-forward neural network (FNN) model. The structure of the FNN is selected through the training and validation using literature data from 11 experiments with different muscle lengths, muscle masses, oscillation frequencies and amplitudes. Data pre-processing methods are used to improve the robustness of the neural network model to match the non-linearity. The validation results show that the FNN model can predict the stiffness ratio changes with a mean square error of 0.0042. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Aerodynamic roughness length estimation from very high-resolution imaging LIDAR observations over the Heihe basin in China

    Directory of Open Access Journals (Sweden)

    J. Colin

    2010-12-01

    Full Text Available Roughness length of land surfaces is an essential variable for the parameterisation of momentum and heat exchanges. The growing interest in the estimation of surface turbulent flux parameterisation from passive remote sensing leads to an increasing development of models, and the common use of simple semi-empirical formulations to estimate surface roughness. Over complex land cover, these approaches would benefit from the combined use of passive remote sensing and land surface structure measurements from Light Detection And Ranging (LIDAR) techniques. Following early studies based on LIDAR profile data, this paper explores the use of imaging LIDAR measurements for the estimation of the aerodynamic roughness length over a heterogeneous landscape of the Heihe river basin, a typical inland river basin in the northwest of China. The point cloud obtained from multiple flight passes over an irrigated farmland area was used to separate the land surface topography and the vegetation canopy into a Digital Elevation Model (DEM) and a Digital Surface Model (DSM), respectively. These two models were then incorporated in two approaches: (i) a strictly geometrical approach based on the calculation of the plan surface density and the frontal surface density to derive a geometrical surface roughness; (ii) a more aerodynamic approach where both the DEM and DSM are introduced in a Computational Fluid Dynamics (CFD) model. The inversion of the resulting 3-D wind field leads to a fine representation of the aerodynamic surface roughness. Examples of the use of these approaches are presented for various wind directions, together with a cross-comparison of results on heterogeneous land cover and complex roughness element structures.
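    The abstract does not spell out its exact geometrical formulation; as a hedged sketch, the classic Lettau (1969) estimate, z0 ≈ 0.5 · h · λ_f (with λ_f the frontal surface density), shows how obstacle height and frontal area enter a geometric roughness length:

```python
def lettau_roughness_length(mean_obstacle_height_m: float,
                            frontal_area_m2: float,
                            lot_area_m2: float) -> float:
    """Classic Lettau (1969) geometric estimate: z0 ~ 0.5 * h * lambda_f,
    where lambda_f is the frontal surface density (frontal area / lot area)."""
    lambda_f = frontal_area_m2 / lot_area_m2
    return 0.5 * mean_obstacle_height_m * lambda_f

# e.g. 10 m high canopy elements with a frontal surface density of 0.1:
z0 = lettau_roughness_length(10.0, 100.0, 1000.0)  # -> 0.5 m
```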

  4. Comparative study on direct and indirect bracket bonding techniques regarding time length and bracket detachment

    Directory of Open Access Journals (Sweden)

    Jefferson Vinicius Bozelli

    2013-12-01

    Full Text Available OBJECTIVE: The aim of this study was to assess the time spent on the direct bracket bonding (DBB) and indirect bracket bonding (IBB) techniques. The time length of the laboratorial (IBB) and clinical steps (DBB and IBB), as well as the prevalence of loose brackets after a 24-week follow-up, were evaluated. METHODS: Seventeen patients (7 men and 10 women) with a mean age of 21 years, requiring orthodontic treatment, were selected for this study. A total of 304 brackets were used (151 DBB and 153 IBB). The same bracket type and bonding material were used in both groups. Data were submitted to statistical analysis by the Wilcoxon non-parametric test at a 5% level of significance. RESULTS: Considering the total time length, the IBB technique was more time-consuming than the DBB (p < 0.001). However, considering only the clinical phase, the IBB took less time than the DBB (p < 0.001). There was no significant difference (p = 0.910) between the time spent on laboratorial positioning of the brackets plus the clinical session for IBB and the clinical procedure for DBB. Additionally, no difference was found in the prevalence of loose brackets between the two groups. CONCLUSION: The IBB can be suggested as a valid clinical procedure, since the clinical session was faster and the total time spent on laboratorial positioning of the brackets plus the clinical procedure was similar to that of DBB. In addition, both approaches resulted in a similar frequency of loose brackets.

  5. Estimating response times of Vadret da Morteratsch, Vadret da Palue, Briksdalsbreen and Nigardsbreen from their length records

    NARCIS (Netherlands)

    Oerlemans, J.

    2007-01-01

    Length records of two pairs of glaciers are used to reconstruct the equilibrium-line altitude (ELA) and to estimate glacier response times. The method is based on the assumption that neighbouring glaciers should be subject to the same climatic forcing, and that differences in the length records are

  6. Estimation of tissue stiffness, reflex activity, optimal muscle length and slack length in stroke patients using an electromyography driven antagonistic wrist model.

    Science.gov (United States)

    de Gooijer-van de Groep, Karin L; de Vlugt, Erwin; van der Krogt, Hanneke J; Helgadóttir, Áróra; Arendzen, J Hans; Meskers, Carel G M; de Groot, Jurriaan H

    2016-06-01

    About half of all chronic stroke patients experience loss of arm function coinciding with increased stiffness, reduced range of motion and a flexed wrist due to a change in neural and/or structural tissue properties. Quantitative assessment of these changes is of clinical importance, yet not trivial. The goal of this study was to quantify the neural and structural properties contributing to wrist joint stiffness and to compare these properties between healthy subjects and stroke patients. Stroke patients (n=32) and healthy volunteers (n=14) were measured using ramp-and-hold rotations applied to the wrist joint by a haptic manipulator. Neural (reflexive torque) and structural (connective tissue stiffness and slack lengths, and (contractile) optimal muscle lengths) parameters were estimated using an electromyography driven antagonistic wrist model. Kruskal-Wallis analysis with multiple comparisons was used to compare results between healthy subjects, stroke patients with a modified Ashworth score of zero, and stroke patients with a modified Ashworth score of one or more. Stroke patients with a modified Ashworth score of one or more differed significantly from healthy controls, e.g. in the slack length of connective tissue of the flexor muscles. Non-invasive quantitative analysis, including estimation of optimal muscle lengths, enables identification of neural and non-neural changes in chronic stroke patients. Monitoring these changes over time is important to understand the recovery process and to optimize treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Field Test of Gopher Tortoise (Gopherus Polyphemus) Population Estimation Techniques

    Science.gov (United States)

    2008-04-01

    Web (WWW) at URL: http://www.cecer.army.mil (ERDC/CERL TR-08-7). The gopher tortoise is a species of conservation concern in the... The required transect line length is estimated as L = (b / [cv_t(D̂)]²) · (L₀ / n₀) (4), where: L = estimate of line length to be sampled, b = dispersion parameter, cv_t(D̂) = desired coefficient of variation of the density estimate, and L₀ and n₀ = the line length and number of detections from a pilot survey.
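    Equation (4) is the standard line-transect survey design formula (Buckland-style distance sampling); a minimal sketch in Python, with illustrative pilot-survey numbers rather than values from the report:

```python
def required_line_length(b: float, target_cv: float,
                         pilot_length: float, pilot_count: float) -> float:
    """Line-transect design: L = (b / cv_t(D)^2) * (L0 / n0),
    where L0 and n0 come from a pilot survey."""
    return (b / target_cv ** 2) * (pilot_length / pilot_count)

# dispersion b = 3, target CV of 10%, pilot survey: 10 km of line, 50 detections
L = required_line_length(b=3.0, target_cv=0.1, pilot_length=10.0, pilot_count=50.0)
# -> (3 / 0.01) * (10 / 50) = 60.0 km of line to sample
```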

  8. New techniques for designing the initial and reload cores with constant long cycle lengths

    International Nuclear Information System (INIS)

    Shi, Jun; Levine, Samuel; Ivanov, Kostadin

    2017-01-01

    Highlights: • New techniques for designing the initial and reload cores with constant long cycle lengths are developed. • Core loading pattern (LP) calculations and comparisons have been made on two different designs. • Results show that significant savings in fuel costs can be accrued if a non-low-leakage LP design strategy is enacted. - Abstract: Several utilities have increased the output power of their nuclear power plants to increase their income and profit. Thus, the utility increases the power density of the reactor, which has other consequences. One consequence is to increase the depletion of the fuel assemblies (FAs) and reduce the end-of-cycle (EOC) sum of fissionable nuclides in each FA, Σ_EOC. The power density and the Σ_EOC remaining in the FAs at EOC must be sufficiently large in many FAs when designing the loading pattern (LP) for the first and reload cycles to maintain constant cycle lengths at minimum fuel cost. Also of importance is the cycle length, as well as several other factors. In fact, the most important result of this study is to understand that the Σ_EOCs in the FAs must be such that in the next cycle they can sustain the energy during depletion to prevent too much power shifting to the fresh FAs and, thus, sending the maximum peak pin power, PPP_max, above its constraint. This paper presents new methods for designing the LPs for the initial and follow-on cycles to minimize the fuel costs. Studsvik's CMS code system provides a 1000 MWe LP design in their sample inputs, which is applied in this study. The first 3 cycles of this core are analyzed to minimize fuel costs, and all three cycles have the same cycle length of ∼650 days. Cycle 1 is designed to allow many used FAs to be loaded into cycles 2 and 3 to reduce their fuel costs. This could not be achieved if cycle 1 was a low-leakage LP (Shi et al., 2015). Significant fuel cost savings are achieved when the new designs are applied to the higher leakage LP designs

  9. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
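    As a toy illustration of three-dimensional volume estimation (not LungVCAR's actual algorithm), nodule volume can be approximated as the voxel count of a segmentation times the voxel volume; the slice thickness dz enters the product directly, which is one reason volume quantification depends on it:

```python
def voxel_volume_estimate(segmented_mask, dx_mm, dy_mm, dz_mm):
    """Volume of a segmented nodule as (number of flagged voxels) * (voxel volume).
    segmented_mask: nested [z][y][x] lists of 0/1."""
    n = sum(v for plane in segmented_mask for row in plane for v in row)
    return n * dx_mm * dy_mm * dz_mm

# 8 flagged voxels of 0.7 x 0.7 mm in-plane, 1.25 mm slice thickness:
mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
vol = voxel_volume_estimate(mask, 0.7, 0.7, 1.25)  # -> 4.9 mm^3
```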

  10. Uncertainties estimation in surveying measurands: application to lengths, perimeters and areas

    Science.gov (United States)

    Covián, E.; Puente, V.; Casero, M.

    2017-10-01

    The present paper develops a series of methods for the estimation of uncertainty when measuring certain measurands of interest in surveying practice, such as the elevation of points at a given planimetric position within a triangle mesh, 2D and 3D lengths (including perimeters of enclosures), 2D areas (horizontal surfaces) and 3D areas (natural surfaces). The basis for the proposed methodology is the law of propagation of variance-covariance, which, applied to the corresponding model for each measurand, allows calculating the resulting uncertainty from known measurement errors. The methods are tested first in a small example, with a limited number of measurement points, and then in two real-life measurements. In addition, the proposed methods have been incorporated to commercial software used in the field of surveying engineering and focused on the creation of digital terrain models. The aim of this evolution is, firstly, to comply with the guidelines of the BIPM (Bureau International des Poids et Mesures), as the international reference agency in the field of metrology, in relation to the determination and expression of uncertainty; and secondly, to improve the quality of the measurement by indicating the uncertainty associated with a given level of confidence. The conceptual and mathematical developments for the uncertainty estimation in the aforementioned cases were conducted by researchers from the AssIST group at the University of Oviedo, eventually resulting in several different mathematical algorithms implemented in the form of MATLAB code. Based on these prototypes, technicians incorporated the referred functionality to commercial software, developed in C++. As a result of this collaboration, in early 2016 a new version of this commercial software was made available, which will be the first, as far as the authors are aware, that incorporates the possibility of estimating the uncertainty for a given level of confidence when computing the aforementioned surveying
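    A minimal sketch of the law of propagation of variance-covariance for one such measurand, a 2D length between two surveyed points, assuming independent and equal coordinate uncertainties (the paper's models are more general and include covariances):

```python
import math

def distance_with_uncertainty(x1, y1, x2, y2, sigma=0.01):
    """2D length and its standard uncertainty via the law of propagation of
    variance, assuming independent, equal coordinate errors `sigma`."""
    d = math.hypot(x2 - x1, y2 - y1)
    # partial derivatives of d w.r.t. the four coordinates (up to sign)
    dx, dy = (x2 - x1) / d, (y2 - y1) / d
    var = (dx ** 2 + dy ** 2 + dx ** 2 + dy ** 2) * sigma ** 2
    return d, math.sqrt(var)

d, sd = distance_with_uncertainty(0.0, 0.0, 3.0, 4.0, sigma=0.01)
# d -> 5.0, sd -> 0.01 * sqrt(2), since two independent endpoints contribute
```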

  11. Estimating age from recapture data: integrating incremental growth measures with ancillary data to infer age-at-length

    Science.gov (United States)

    Eaton, Mitchell J.; Link, William A.

    2011-01-01

    Estimating the age of individuals in wild populations can be of fundamental importance for answering ecological questions, modeling population demographics, and managing exploited or threatened species. Significant effort has been devoted to determining age through the use of growth annuli, secondary physical characteristics related to age, and growth models. Many species, however, either do not exhibit physical characteristics useful for independent age validation or are too rare to justify sacrificing a large number of individuals to establish the relationship between size and age. Length-at-age models are well represented in the fisheries and other wildlife management literature. Many of these models overlook variation in growth rates of individuals and consider growth parameters as population parameters. More recent models have taken advantage of hierarchical structuring of parameters and Bayesian inference methods to allow for variation among individuals as functions of environmental covariates or individual-specific random effects. Here, we describe hierarchical models in which growth curves vary as individual-specific stochastic processes, and we show how these models can be fit using capture–recapture data for animals of unknown age along with data for animals of known age. We combine these independent data sources in a Bayesian analysis, distinguishing natural variation (among and within individuals) from measurement error. We illustrate using data for African dwarf crocodiles, comparing von Bertalanffy and logistic growth models. The analysis provides the means of predicting crocodile age, given a single measurement of head length. The von Bertalanffy was much better supported than the logistic growth model and predicted that dwarf crocodiles grow from 19.4 cm total length at birth to 32.9 cm in the first year and 45.3 cm by the end of their second year. Based on the minimum size of females observed with hatchlings, reproductive maturity was estimated
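    A hedged sketch of the underlying idea with the von Bertalanffy curve: given fitted growth parameters (the values below are illustrative, not the paper's crocodile estimates), a single length measurement can be inverted to a predicted age:

```python
import math

def vb_length(age, l_inf, k, t0=0.0):
    """von Bertalanffy growth curve: L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

def vb_age(length, l_inf, k, t0=0.0):
    """Invert the curve to predict age from a single length measurement."""
    return t0 - math.log(1.0 - length / l_inf) / k

# illustrative parameters (asymptotic length 60 cm, growth rate 0.35 / yr):
L_INF, K = 60.0, 0.35
l2 = vb_length(2.0, L_INF, K)     # length at age 2
age_back = vb_age(l2, L_INF, K)   # recovers age 2.0
```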

  12. Is length an appropriate estimator to characterize pulmonary alveolar capillaries? A critical evaluation in the human lung

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Weibel, Ewald R.; Hahn, Ute

    2010-01-01

    Stereological estimations of total capillary length have been used to characterize changes in the alveolar capillary network (ACN) during developmental processes or pathophysiological conditions. Here, we analyzed whether length estimations are appropriate to describe the 3D nature of the ACN. Semi...... resulted in a mean of 2,746 km (SD: 722 km). Because of the geometry of the ACN both approaches carry an unpredictable bias. The bias incurred by the design-based approach is proportional to the ratio between radius and length of the capillary segments in the ACN, the number of branching points...... and the winding of the capillaries. The model-based approach is biased because of the real noncylindrical shape of capillaries and the network structure. In conclusion, the estimation of the total length of capillaries in the ACN cannot be recommended as the geometry of the ACN does not fulfill the requirements...

  13. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
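    The abstract does not give the estimator's formulas; as a sketch of the collision model behind it, if N ONUs pick response slots uniformly at random in a quiet window of W slots, the chance that a given ONU registers without collision is (1 - 1/W)^(N-1), which is what motivates enlarging the window under congestion:

```python
def registration_success_prob(n_onus: int, window_slots: int) -> float:
    """Probability that one ONU's random response slot collides with no other
    ONU, assuming uniform, independent slot choices (a simplifying model)."""
    return (1.0 - 1.0 / window_slots) ** (n_onus - 1)

def expected_registered(n_onus: int, window_slots: int) -> float:
    """Expected number of ONUs that register collision-free in one round."""
    return n_onus * registration_success_prob(n_onus, window_slots)

p = registration_success_prob(10, 100)   # 10 ONUs, 100 slots -> about 0.91
```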

  14. Identification and characterization of some aromatic rice mutants using amplified fragment length polymorphism (AFLP) technique

    International Nuclear Information System (INIS)

    Fahmy, E.M.; Sobieh, S. E. S.; Ayaad, M. H.; El-Gohary, A. A.; Rownak, A.

    2012-12-01

    Accurate identification of genotypes is one of the most important mechanisms used in the recording and protection of plant varieties. The investigation was conducted at the experimental farm belonging to the Egyptian Atomic Energy Authority, Inshas. The aim was to evaluate grain quality characteristics and molecular genetic variation, using the Amplified Fragment Length Polymorphism (AFLP) technique, among six rice genotypes: the Egyptian Jasmine aromatic rice cultivar and five aromatic rice mutants in the M3 mutagenic generation. Two mutants (Egy22 and Egy24) were selected from a Sakha 102 population irradiated with 200 and 400 Gy of gamma rays, respectively, in the M2 generation, and three mutants (Egy32, Egy33 and Egy34) were selected from a Sakha 103 population irradiated with 200, 300 and 400 Gy of gamma rays, respectively, in the M2 generation. The results showed that strong aroma was obtained for mutant Egy22 as compared with the Egyptian Jasmine rice cultivar (moderate aroma). Seven primer combinations were used across the six rice genotypes at the molecular level with the AFLP marker. The sizes of the AFLP fragments ranged from 51 to 494 bp. The total number of amplified bands was 997, among them 919 polymorphic bands (92.2%). The highest similarity index (89%) was observed between Egyptian Jasmine and Egy32, followed by 82% between Egyptian Jasmine and Egy34. On the other hand, the lowest similarity index (48%) was between Egyptian Jasmine and Egy24. Among the six rice genotypes, Egy24 produced the highest number of AFLP markers, giving 49 unique markers (23 positive and 26 negative); Egy22 showed 33 unique markers (27 positive and 6 negative), while Egy33 was characterized by 17 unique markers (12 positive and 5 negative). Lastly, Egyptian Jasmine was discriminated by the lowest number of markers, 10 (6 positive and 4 negative). The study further confirmed that the AFLP technique was able to differentiate rice genotypes by a higher number
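    The abstract does not state which similarity coefficient was used; the Nei & Li (Dice) band-sharing index is the usual choice for AFLP data and can be computed from binary band profiles (the band names below are hypothetical):

```python
def band_similarity(bands_a: set, bands_b: set) -> float:
    """Nei & Li / Dice similarity index commonly used with AFLP data:
    S = 2 * (shared bands) / (bands in A + bands in B)."""
    shared = len(bands_a & bands_b)
    return 2.0 * shared / (len(bands_a) + len(bands_b))

# hypothetical band profiles for two genotypes:
jasmine = {"b51", "b60", "b72", "b88", "b120"}
egy32 = {"b51", "b60", "b72", "b88", "b150"}
s = band_similarity(jasmine, egy32)  # 4 shared of 5+5 bands -> 0.8
```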

  15. Stature estimation from body segment lengths in young adults--application to people with physical disabilities.

    Science.gov (United States)

    Canda, Alicia

    2009-03-01

    Knowledge of stature is necessary for evaluating nutritional status and for correcting certain functional parameters. Measuring stature is difficult or impossible in bedridden or wheelchair-bound persons and may also be diminished by disorders of the spinal column or extremities. The purpose of this work is to develop estimation equations for young adult athletes for their subsequent application to disabled persons. The main sample comprised 445 male and 401 female sportspersons. Cross validation was also performed on 100 males and 101 females. All were Caucasian, the males being over 21 and the females over 18, and all practiced some kind of sport. The following variables were included: stature, sitting height, arm span, and lengths of upper arm, forearm, hand, thigh, lower leg, and foot. Simple and multiple regression analyses were performed using stature as the dependent variable and the others as predictive variables. The best equation for males (R² = 0.978; RMSE = 1.41 cm; PE = 1.54 cm) was: stature = 1.346 + 1.023 × lower leg + 0.957 × sitting height + 0.530 × thigh + 0.493 × upper arm + 0.228 × forearm. For females (R² = 0.959; RMSE = 1.57 cm; PE = 1.25 cm) it was: stature = 1.772 + 0.159 × arm span + 0.957 × sitting height + 0.424 × thigh + 0.966 × lower leg. Alternative equations were developed for when a particular variable cannot be included for reasons of mobility, technical difficulty, or segment loss.
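    The best male equation reported above can be applied directly (all segment lengths in cm; the example measurements below are hypothetical):

```python
def estimate_stature_male(lower_leg, sitting_height, thigh, upper_arm, forearm):
    """Male stature equation from the abstract; all lengths in cm."""
    return (1.346 + 1.023 * lower_leg + 0.957 * sitting_height
            + 0.530 * thigh + 0.493 * upper_arm + 0.228 * forearm)

# hypothetical segment measurements (cm):
h = estimate_stature_male(lower_leg=40.0, sitting_height=92.0,
                          thigh=45.0, upper_arm=33.0, forearm=26.0)
# -> about 176.4 cm
```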

  16. An automated A-value measurement tool for accurate cochlear duct length estimation.

    Science.gov (United States)

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High resolution micro-CT images of the same 19 specimens were used to measure the gold standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold standard micro-CT A-values (r = 0.70). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit

  17. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    Full Text Available In this paper, we propose an improvement of the Minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The experimental data are 47 fire accident records of an insurance company in Thailand. The experiment has two operations. In the first operation, we minimize the K-S statistic value using a grid search technique for nine distributions: the Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, lognormal, and exponential distributions. In the second operation, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the Minimum K-S estimator: the algorithm gives a better Minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, lognormal, and exponential), while the Minimum K-S estimators of the normal and logistic distributions are unchanged.
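    A minimal sketch of the first operation, minimizing the K-S statistic over a parameter grid, here for a single-parameter exponential model rather than the paper's nine candidate families (data below are simulated, not the insurance records):

```python
import math
import random

def ks_statistic(sorted_data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and `cdf`."""
    n = len(sorted_data)
    d = 0.0
    for i, x in enumerate(sorted_data, start=1):
        f = cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

def min_ks_exponential_rate(data, grid):
    """Grid-search the exponential rate that minimises the K-S statistic."""
    xs = sorted(data)
    return min(grid,
               key=lambda lam: ks_statistic(xs, lambda x: 1 - math.exp(-lam * x)))

rng = random.Random(7)
sample = [rng.expovariate(0.5) for _ in range(300)]   # true rate 0.5
grid = [0.05 * k for k in range(1, 41)]               # 0.05 .. 2.00
lam_hat = min_ks_exponential_rate(sample, grid)       # should land near 0.5
```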

  18. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    Science.gov (United States)

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

    In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the lengths of the incubation and nestling stages and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.

  19. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Full Text Available Using the fixed-point theorem, sovereign default models are solved by numerical value function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forgo their country-specific quantitative projection ability. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given the structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. Thus, with this technique we offer an alternative solution method for dynamic general equilibrium models of sovereign default to improve upon their quantitative inference ability.
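    The core of the Hotz-Miller technique is the inversion from conditional choice probabilities (CCPs) back to differences in choice-specific value functions; in the binary logit case this inversion is just the log-odds, as the toy roundtrip below shows (a sketch of the idea, not of the paper's full estimator):

```python
import math

def logit_ccp(value_diff: float) -> float:
    """Conditional choice probability of action 1 in a binary logit model."""
    return 1.0 / (1.0 + math.exp(-value_diff))

def hotz_miller_invert(p1: float) -> float:
    """Hotz-Miller inversion: with type-1 extreme value shocks, the difference
    in choice-specific value functions equals the log-odds of the CCPs."""
    return math.log(p1 / (1.0 - p1))

# roundtrip: a value difference of 1.2 implies a CCP, which inverts back to 1.2
p = logit_ccp(1.2)
v_back = hotz_miller_invert(p)
```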

  20. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
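    The abstract does not reproduce its estimator; a common RSS building block such methods rest on is the log-distance path-loss model, inverted to obtain a range estimate from a single reading (the parameter values below are illustrative assumptions, not from the paper):

```python
import math

def rss_dbm(distance_m, p0_dbm=-40.0, n=3.0, d0_m=1.0):
    """Log-distance path-loss model: RSS = P0 - 10 * n * log10(d / d0)."""
    return p0_dbm - 10.0 * n * math.log10(distance_m / d0_m)

def distance_from_rss(rss, p0_dbm=-40.0, n=3.0, d0_m=1.0):
    """Invert the model to estimate range from a single RSS reading."""
    return d0_m * 10.0 ** ((p0_dbm - rss) / (10.0 * n))

# noise-free roundtrip: a reading generated at 25 m inverts back to 25 m
d_hat = distance_from_rss(rss_dbm(25.0))
```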

  1. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
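    Latin Hypercube Sampling, one of the two parameter-sampling techniques described, can be sketched in a few lines: each dimension is cut into N equal strata and each stratum is used exactly once, so the sample count need not grow with the number of parameter dimensions (a generic sketch, not GRAPE's implementation):

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, seed: int = 0):
    """Latin hypercube sample in [0,1)^d: each dimension is split into
    n_samples equal strata, and each stratum is used exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

pts = latin_hypercube(8, 3)  # 8 points covering all 8 strata in each of 3 dims
```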

  2. The effect of epoch length on estimated EEG functional connectivity and brain network organisation

    Science.gov (United States)

    Fraschini, Matteo; Demuru, Matteo; Crobe, Alessandra; Marrosu, Francesco; Stam, Cornelis J.; Hillebrand, Arjan

    2016-06-01

    Objective. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. Approach. The aim of this study was to provide a network approach insensitive to the effects that epoch length has on functional connectivity and network reconstruction. Two different measures, the phase lag index (PLI) and the amplitude envelope correlation (AEC), were applied to EEG resting-state recordings for a group of 18 healthy volunteers using non-overlapping epochs with variable length (1, 2, 4, 6, 8, 10, 12, 14 and 16 s). Weighted clustering coefficient (CCw), weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Main results. Results from scalp analysis show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 s for PLI and 6 s for AEC. Moreover, CCw and Lw show very similar behaviour, with metrics based on AEC more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 s versus 4-8 s for AEC). At the source-level the results were even more reliable, with stability already at 1 s duration for PLI-based MSTs. Significance. The present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp-level.
Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain

  3. Estimation of fatigue life using electromechanical impedance technique

    Science.gov (United States)

    Lim, Yee Yan; Soh, Chee Kiong

    2010-04-01

    Fatigue-induced damage is often progressive and gradual in nature. Structures subjected to a large number of fatigue load cycles undergo progressive crack initiation, propagation and finally fracture. Monitoring of structural health, especially for critical components, is therefore essential for early detection of potentially harmful cracks. Smart-material approaches such as piezo-impedance transducers employing the electromechanical impedance (EMI) technique and the wave propagation technique are well proven to be effective in incipient damage detection and characterization. Advantages such as autonomous, real-time, online and remote monitoring may provide a cost-effective alternative to conventional structural health monitoring (SHM) techniques. In this study, the main focus is to investigate the feasibility of characterizing a propagating fatigue crack in a structure using the EMI technique, as well as estimating its remaining fatigue life using the linear elastic fracture mechanics (LEFM) approach. Uniaxial cyclic tensile load is applied to a lab-sized aluminum beam up to failure. The progressive shift in admittance signatures measured by the piezo-impedance transducer (PZT patch) with increasing numbers of loading cycles reflects the effectiveness of the EMI technique in tracing the process of fatigue damage progression. With the use of LEFM, prediction of the remaining life of the structure at different cycles of loading is possible.
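An LEFM remaining-life estimate of the kind described can be sketched by numerically integrating Paris' crack-growth law da/dN = C(ΔK)^m. The material constants and crack sizes below are illustrative values of the right order for an aluminum alloy, not parameters from the paper:

```python
import math

def remaining_cycles(a0, ac, C, m, delta_sigma, Y=1.0, steps=10000):
    """Estimate cycles to grow a crack from a0 to ac (meters) by
    integrating Paris' law da/dN = C * (dK)^m with the midpoint rule,
    where dK = Y * delta_sigma * sqrt(pi * a) (MPa * sqrt(m))."""
    n = 0.0
    da = (ac - a0) / steps
    a = a0
    for _ in range(steps):
        a_mid = a + da / 2.0
        dK = Y * delta_sigma * math.sqrt(math.pi * a_mid)
        n += da / (C * dK ** m)
        a += da
    return n

# Illustrative (assumed) values: C in (m/cycle)/(MPa*sqrt(m))^m,
# stress range in MPa, crack sizes in meters.
N = remaining_cycles(a0=1e-3, ac=10e-3, C=1e-11, m=3.0, delta_sigma=100.0)
```

For m ≠ 2 the integral also has a closed form, so the numerical result can be cross-checked analytically.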

  4. Biomass estimates of freshwater zooplankton from length-carbon regression equations

    Directory of Open Access Journals (Sweden)

    Patrizia COMOLI

    2000-02-01

    Full Text Available We present length/carbon regression equations of zooplankton species collected from Lake Maggiore (N. Italy) during 1992. The results are discussed in terms of the environmental factors, e.g. food availability and predation, controlling biomass production of particle-feeders and predators in the pelagic system of lakes. The marked seasonality in the length-standardized carbon content of Daphnia and its time-specific trend suggest that from spring onward food availability for the Daphnia population may be regarded as a simple decay function. Seasonality does not affect the carbon content per unit length of the two predatory Cladocera Leptodora kindtii and Bythotrephes longimanus. Predation is probably the most important regulating factor for the seasonal dynamics of their carbon biomass. The existence of a constant factor to convert the diameter of Conochilus colonies into carbon seems reasonable for an organism whose population builds up quickly and just as quickly disappears.
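Length-carbon regressions of this kind are typically fitted as a power law C = aL^b on log-transformed data. The (length, carbon) pairs below are invented for illustration; the study's actual coefficients are species- and season-specific:

```python
import numpy as np

# Hypothetical (length mm, carbon ug) pairs, roughly allometric.
length = np.array([0.8, 1.0, 1.2, 1.5, 1.8, 2.1])
carbon = np.array([1.1, 2.0, 3.4, 6.5, 11.0, 17.0])

# Fit ln C = ln a + b ln L by ordinary least squares on log-log axes.
b, ln_a = np.polyfit(np.log(length), np.log(carbon), 1)
a = np.exp(ln_a)

def carbon_from_length(L):
    """Predicted carbon content (ug) for a given body length (mm)."""
    return a * L ** b
```

The exponent b typically lands near 3 for approximately isometric growth, which is a quick sanity check on any fitted equation.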

  5. Estimation of Kubo number and correlation length of fluctuating magnetic fields and pressure in BOUT++ edge pedestal collapse simulation

    Science.gov (United States)

    Kim, Jaewook; Lee, W.-J.; Jhang, Hogun; Kaang, H. H.; Ghim, Y.-C.

    2017-10-01

    Stochastic magnetic fields are thought to be one of the possible mechanisms for anomalous transport of density, momentum and heat across the magnetic field lines. The Kubo number and the Chirikov parameter quantify this stochasticity, and previous studies show that perpendicular transport strongly depends on the magnetic Kubo number (MKN). If the MKN is smaller than one, the diffusion process follows the Rechester-Rosenbluth model; if it is larger than one, percolation theory dominates the diffusion process. Thus, estimating the Kubo number is important for understanding diffusion caused by stochastic magnetic fields. However, spatially localized experimental measurement of fluctuating magnetic fields in a tokamak is difficult, and we attempt to estimate MKNs using BOUT++ simulation data with pedestal collapse. In addition, we calculate correlation lengths of fluctuating pressures and Chirikov parameters to investigate the variation of correlation lengths in the simulation. We then discuss how one may experimentally estimate MKNs.
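Once the fluctuation amplitude and correlation lengths are in hand, the magnetic Kubo number follows directly from its commonly used definition. This is the generic textbook form, not necessarily the exact estimator applied to the BOUT++ data:

```python
def magnetic_kubo_number(db_rms, B0, l_parallel, l_perp):
    """Magnetic Kubo number in its common form
        K = (dB/B0) * (l_par / l_perp),
    with l_par and l_perp the parallel and perpendicular correlation
    lengths of the fluctuating field.  K < 1: quasilinear
    (Rechester-Rosenbluth) regime; K > 1: percolation regime."""
    return (db_rms / B0) * (l_parallel / l_perp)
```

For example, a relative fluctuation level of 10^-3 with a parallel/perpendicular correlation-length ratio of 1000 sits right at the K = 1 boundary between the two transport regimes.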

  6. Effect of the Length of Traffic Flow Records on the Estimate of a Bridge Service Life

    Directory of Open Access Journals (Sweden)

    Krejsa Jan

    2016-12-01

    Full Text Available The service life of bridges is significantly affected by fatigue of the material, induced by heavy vehicles. Therefore, precise determination of vehicle weights is of crucial importance for the calculation of fatigue damage and the prediction of bridge serviceability. This paper investigates the accuracy of fatigue determination as a function of the length of the traffic flow record. The presented data were obtained from measurements carried out on a bridge of the Prague Highway Ring. The analysis reveals that the optimal length of traffic recording is about 30 days.

  7. Estimation of vertical load on a tire from contact patch length and its use in vehicle stability control

    OpenAIRE

    Dhasarathy, Deepak

    2010-01-01

    The vertical load on a moving tire was estimated by using accelerometers attached to the inner liner of the tire. The acceleration signal was processed to obtain the length of the contact patch created by the tire on the road surface. An appropriate equation relating the patch length to the vertical load was then used to calculate the load. In order to obtain the needed data, tests were performed on a flat-track test machine at the Goodyear Innovation Center in Akron, Ohio; tests were also conducted on...

  8. Estimation of Length and Order of Polynomial-based Filter Implemented in the Form of Farrow Structure

    Directory of Open Access Journals (Sweden)

    S. Vukotic

    2016-08-01

    Full Text Available Digital polynomial-based interpolation filters implemented using the Farrow structure are used in Digital Signal Processing (DSP) to calculate signal values between discrete samples. The two basic design parameters for these filters are the number of polynomial segments, which defines the finite length of the impulse response, and the order of the polynomial in each segment. The complexity of the implementation structure and the frequency-domain performance depend on these two parameters. This contribution presents estimation formulae for the length and polynomial order of polynomial-based filters for various types of requirements, including stopband attenuation, transition-band width, passband deviation, and passband/stopband weighting.
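A minimal sketch of the Farrow structure itself, here for a cubic Lagrange interpolator (a common special case used for illustration, not necessarily the filters designed in the paper). The defining trait is that fixed sub-filter outputs c0..c3 are combined by a Horner scheme in the fractional delay mu:

```python
def farrow_cubic(x, n, mu):
    """Evaluate the interpolated signal value between samples x[n] and
    x[n+1] at fractional position mu in [0, 1), using a cubic Lagrange
    polynomial through x[n-1..n+2].  The c_k are the outputs of the four
    fixed FIR sub-filters of the Farrow structure; the final combination
    is a Horner evaluation in mu."""
    x0, x1, x2, x3 = x[n - 1], x[n], x[n + 1], x[n + 2]
    c0 = x1
    c1 = -x0 / 3 - x1 / 2 + x2 - x3 / 6
    c2 = x0 / 2 - x1 + x2 / 2
    c3 = -x0 / 6 + x1 / 2 - x2 / 2 + x3 / 6
    return ((c3 * mu + c2) * mu + c1) * mu + c0

y = farrow_cubic([0.0, 1.0, 2.0, 3.0], 1, 0.25)  # 1.25 on a linear ramp
```

Because only mu changes at runtime, the same fixed sub-filters serve every fractional delay, which is exactly why the structure is attractive for sample-rate conversion.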

  9. Quantitative estimation of Holocene surface salinity variation in the Black Sea using dinoflagellate cyst process length

    DEFF Research Database (Denmark)

    Mertens, Kenneth Neil; Bradley, Lee R.; Takano, Yoshihito

    2012-01-01

    Reconstruction of salinity in the Holocene Black Sea has been an ongoing debate over the past four decades. Here we calibrate summer surface water salinity in the Black Sea, Sea of Azov and Caspian Sea with the process length of the dinoflagellate cyst Lingulodinium machaerophorum. We then apply ...

  10. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Science.gov (United States)

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  11. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
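The maximum likelihood fitting of a learning curve to event data can be sketched as MLE for a nonhomogeneous Poisson process with a decaying rate. The exponential model form and the event times below are assumptions for illustration, not the nine-accident data set behind the paper:

```python
import math

def neg_log_likelihood(lam0, b, times, T):
    """Negative log-likelihood of a nonhomogeneous Poisson process with
    rate lam(t) = lam0 * exp(-b * t), observed on [0, T] (t in cumulative
    operating years).  ll = sum(log lam(t_i)) - integral of lam over [0,T]."""
    integral = lam0 / b * (1.0 - math.exp(-b * T)) if b > 0 else lam0 * T
    ll = sum(math.log(lam0) - b * t for t in times) - integral
    return -ll

def fit_learning_curve(times, T, grid=200):
    """Crude grid-search MLE for (lam0, b); adequate for a sketch, where
    a real analysis would use a proper optimizer."""
    best = None
    for i in range(1, grid):
        lam0 = 0.5 * i / grid
        for j in range(grid):
            b = 0.2 * j / grid
            nll = neg_log_likelihood(lam0, b, times, T)
            if best is None or nll < best[0]:
                best = (nll, lam0, b)
    return best[1], best[2]

# Hypothetical event times clustered early, suggesting learning (b > 0).
events = [2.0, 5.0, 9.0, 14.0, 30.0, 55.0]
lam0_hat, b_hat = fit_learning_curve(events, T=400.0)
```

A fitted b greater than zero is the MLE analogue of the paper's conclusion that occurrence rates declined as operating experience accumulated.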

  12. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  14. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    Full Text Available The aim of this paper is to propose simple and quick methods for determining the sound power emitted by a vibrating surface using non-contact vibration measurement techniques. Two different approaches to calculating the acoustic power from vibration data are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser Doppler scanning vibrometer has been employed for the vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusiveness. The technique has been applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software package and experimentally validated by acoustic intensity measurements. Predicted and experimental results are in agreement (differences lower than 1 dB), showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effect is discussed.
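The first approach, the ISO/TR 7849 idea, can be sketched as follows: radiated power is the surface-averaged mean-square normal velocity scaled by the air impedance, surface area, and a radiation efficiency sigma. Sigma is an assumed input here (and in practice the main uncertainty source):

```python
import math

RHO_AIR = 1.21   # air density, kg/m^3
C_AIR = 343.0    # speed of sound in air, m/s

def sound_power_from_velocity(v_rms_points, area, sigma=1.0):
    """Radiated sound power W = rho * c * S * sigma * <v^2>, where <v^2>
    is the surface average of the squared RMS normal velocity.
    v_rms_points: per-point RMS velocities (m/s), e.g. from a scanning
    vibrometer grid; sigma: radiation efficiency (assumed; ~1 above the
    coincidence frequency)."""
    v2_mean = sum(v * v for v in v_rms_points) / len(v_rms_points)
    return RHO_AIR * C_AIR * area * sigma * v2_mean

def sound_power_level(W, W_ref=1e-12):
    """Sound power level in dB re 1 pW."""
    return 10.0 * math.log10(W / W_ref)

W = sound_power_from_velocity([1.0e-3] * 10, area=0.01)
Lw = sound_power_level(W)
```

The scanning vibrometer's dense spatial sampling is what makes the surface average <v^2> well resolved compared with a handful of contact accelerometers.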

  15. Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation

    DEFF Research Database (Denmark)

    Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.

    2015-01-01

    Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting of needle-shaped crystals and estimating...

  16. ESTIMATION OF BURSTS LENGTH AND DESIGN OF A FIBER DELAY LINE BASED OBS ROUTER

    Directory of Open Access Journals (Sweden)

    RICHA AWASTHI

    2017-03-01

    Full Text Available The demand for higher bandwidth is increasing day by day, and this ever-growing demand cannot be catered to with current electronic technology; new communication technologies such as optical communication need to be used. In this context, OBS (optical burst switching) is considered a next-generation data transfer technology. In OBS, information is transmitted in the form of optical bursts of variable length. However, contention among bursts is a major problem in OBS systems, and deflection routing is mostly preferred for contention resolution. Deflection routing, however, increases delay. In this paper, it is shown that the arrival of very large bursts is a rare event, and that for moderate burst lengths the buffering of contending bursts can provide a very effective solution; in the case of large bursts, deflection can still be used.

  17. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    Full Text Available The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input sets under the prerequisite that the resultant set is equivalent to the full set in terms of the ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors for estimating insulator contamination. On that basis, different classification algorithms, namely Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood, were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performance of the different methods. SVM yielded the best overall accuracy among the three algorithms, with an overall accuracy of more than 70% achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contamination.
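One of the compared classifiers, a Mahalanobis minimum-distance rule, can be sketched in a few lines: assign each sample to the class whose mean is nearest in Mahalanobis distance under a pooled covariance. The 2-D feature vectors below are invented for illustration; the study's actual features are AOD, pollution-source strength, precipitation, and so on:

```python
import numpy as np

def fit_classes(X, y):
    """Per-class means and the inverse of the pooled within-class
    covariance, the two ingredients of a Mahalanobis minimum-distance
    classifier."""
    classes = sorted(set(y))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    pooled = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    pooled /= (len(y) - len(classes))
    inv_cov = np.linalg.inv(pooled)
    return means, inv_cov

def predict(x, means, inv_cov):
    """Label of the class mean nearest to x in Mahalanobis distance."""
    def d2(c):
        diff = x - means[c]
        return diff @ inv_cov @ diff
    return min(means, key=d2)

# Two invented, well-separated severity classes in a 2-D feature space.
X = np.array([[0.1, 0.2], [0.0, -0.1], [0.2, 0.0],
              [5.1, 4.9], [4.8, 5.2], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
means, inv_cov = fit_classes(X, y)
label = predict(np.array([4.5, 4.6]), means, inv_cov)
```

The same train/predict split drops straight into a 10-fold cross-validation loop of the kind the study uses to compare classifiers.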

  18. Blood Capillary Length Estimation from Three-Dimensional Microscopic Data by Image Analysis and Stereology

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Mao, X. W.; Janáček, Jiří

    2013-01-01

    Roč. 19, č. 4 (2013), s. 898-906 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) ME09010; GA MŠk(CZ) LH13028; GA ČR(CZ) GAP108/11/0794 Institutional research plan: CEZ:AV0Z5011922 Institutional support: RVO:67985823 Keywords : capillaries * confocal microscopy * image analysis * length * rat brain * stereology Subject RIV: EA - Cell Biology Impact factor: 1.757, year: 2013

  19. A low tritium hydride bed inventory estimation technique

    Energy Technology Data Exchange (ETDEWEB)

    Klein, J.E.; Shanahan, K.L.; Baker, R.A. [Savannah River National Laboratory, Aiken, SC (United States); Foster, P.J. [Savannah River Nuclear Solutions, Aiken, SC (United States)

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low-concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing-gas calorimetric inventory measurement method, and low tritium beds contain less than the detection limit of the In-Bed Accountability (IBA) technique used for tritium inventory. This paper describes two techniques for estimating tritium content and its uncertainty for low tritium content beds, for use in the facility's physical inventory (PI). PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach, the mid-point approximation (MPA), assumes the bed is half-full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.

  20. A vendor managed inventory model using continuous approximations for route length estimates and Markov chain modeling for cost estimates

    DEFF Research Database (Denmark)

    Larsen, Christian; Turkensteen, Marcel

    2014-01-01

    be a two-dimensional area or a one-dimensional line structure (corresponding to e.g. a major traffic artery). The expected travel distances across a given number of retailers can now be estimated analytically, using results from the field of continuous approximation for two-dimensional areas, or using our...

  1. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Bojan Nemec

    2014-10-01

    Full Text Available High-precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined by the antenna typically placed behind the skier’s neck. A key issue is how to estimate other, more relevant parameters of the skier’s body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier’s body with an inverted-pendulum model, which oversimplifies the body’s kinematics. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier’s body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform those of the commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing.

  2. Using support vector machines in the multivariate state estimation technique

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K.C.

    1999-01-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications

  3. A method for estimating age of medieval sub-adults from infancy to adulthood based on long bone length

    DEFF Research Database (Denmark)

    Primeau, Charlotte; Friis, Laila Saidane; Sejrsen, Birgitte

    2016-01-01

    OBJECTIVES: To develop a series of regression equations for estimating age from the length of long bones for archaeological sub-adults when aging from dental development cannot be performed, and to compare the ages derived using these regression equations and two other methods. MATERIAL AND METHODS: A total of 183 skeletal sub-adults from the Danish medieval period were aged from radiographic images. Linear regression formulae were then produced for individual bones. Age was then estimated from the femur length using three different methods: equations developed in this study, data based... as later than the medieval period, although this would require further testing. The quadratic equations are suggested to yield more accurate ages than simple linear regression equations. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
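The study's preference for quadratic over linear fits can be illustrated with a quadratic regression of age on femur length. The (length, age) pairs below are hypothetical, since the abstract does not reproduce the published coefficients:

```python
import numpy as np

# Hypothetical (femur diaphyseal length cm, dental age yr) pairs;
# growth decelerating in length makes age convex in length.
length = np.array([8.0, 12.0, 18.0, 24.0, 30.0, 36.0, 42.0])
age = np.array([0.2, 1.0, 3.0, 5.5, 8.5, 12.0, 16.0])

# Quadratic regression age = c2*L^2 + c1*L + c0 by least squares.
c2, c1, c0 = np.polyfit(length, age, 2)

def age_from_femur(L):
    """Estimated age (years) from femur length (cm), quadratic model."""
    return c2 * L ** 2 + c1 * L + c0
```

A positive c2 captures the curvature that a simple linear fit misses at the extremes of the length range, which is the practical argument for the quadratic form.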

  4. Length and volume of morphologically normal kidneys in Korean Children: Ultrasound measurement and estimation using body size

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jun Hwee; Kim, Myung Joon; Lim, Sok Hwan; Lee, Mi Jung [Dept. of Radiology and Research Institute of Radiological Science, Severance Children's Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of)]; Kim, Ji Eun [Biostatistics Collaboration Unit, Yonsei University College of Medicine, Seoul (Korea, Republic of)]

    2013-08-15

    To evaluate the relationship between anthropometric measurements and renal length and volume measured with ultrasound in Korean children who have morphologically normal kidneys, and to create simple equations to estimate renal sizes from the anthropometric measurements. We examined 794 Korean children under 18 years of age, 394 boys and 400 girls, without renal problems. The maximum renal length (L) (cm), orthogonal anterior-posterior diameter (D) (cm) and width (W) (cm) of each kidney were measured on ultrasound. Kidney volume was calculated as 0.523 × L × D × W (cm³). Anthropometric indices including height (cm), weight (kg) and body mass index (kg/m²) were collected through a medical record review. We used linear regression analysis to create simple equations to estimate renal length and volume from the anthropometric indices most strongly correlated with the US-measured renal sizes. Renal length showed the strongest significant correlation with patient height (R² = 0.874 and 0.875 for the right and left kidneys, respectively, p < 0.001). Renal volume showed the strongest significant correlation with patient weight (R² = 0.842 and 0.854 for the right and left kidneys, respectively, p < 0.001). The following equations were developed to describe these relationships with an estimated 95% range of renal length and volume (R² = 0.826-0.884, p < 0.001): renal length = 2.383 + 0.045 × Height (± 1.135) and = 2.374 + 0.047 × Height (± 1.173) for the right and left kidneys, respectively; and renal volume = 7.941 + 1.246 × Weight (± 15.920) and = 7.303 + 1.532 × Weight (± 18.704) for the right and left kidneys, respectively. Scatter plots between height and renal length and between weight and renal volume have been established for Korean children, and simple equations relating them have been developed for use in clinical practice.
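The reported equations translate directly into code; the coefficients below are taken from the abstract (heights in cm, weights in kg), while the function names are our own:

```python
def renal_length_cm(height_cm, side="right"):
    """Estimated renal length (cm) from the study's regressions:
    right: 2.383 + 0.045*Height (95% range +/- 1.135)
    left:  2.374 + 0.047*Height (95% range +/- 1.173)"""
    if side == "right":
        return 2.383 + 0.045 * height_cm
    return 2.374 + 0.047 * height_cm

def renal_volume_cm3(weight_kg, side="right"):
    """Estimated renal volume (cm^3) from the study's regressions:
    right: 7.941 + 1.246*Weight (95% range +/- 15.920)
    left:  7.303 + 1.532*Weight (95% range +/- 18.704)"""
    if side == "right":
        return 7.941 + 1.246 * weight_kg
    return 7.303 + 1.532 * weight_kg
```

For a 110 cm, 20 kg child this gives a right renal length of about 7.3 cm and a right renal volume of about 33 cm³, which can then be checked against the published 95% ranges.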

  6. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    The validity of responses in surveys is an important research concern, especially in emerging market economies, where surveys of the general population are a novelty and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate the population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. Our aim was to apply RRT to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin were also larger than the standard self-report estimates. We remain unsure about the "true" prevalence of illegal psychotropic drug use in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce under-estimates or, at best, highly conservative lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
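A standard forced-response RRT estimator shows how an observed "yes" rate is converted into a prevalence estimate. The design probabilities below are conventional illustrative choices; the abstract does not specify the survey's exact randomization scheme:

```python
def rrt_prevalence(yes_rate, p_sensitive=0.75, p_forced_yes=0.125):
    """Forced-response RRT estimator.  With probability p_sensitive the
    respondent answers the sensitive question truthfully; otherwise a
    known fraction (p_forced_yes) is instructed to answer 'yes'
    regardless.  Then E[yes] = p_sensitive * pi + p_forced_yes, so
        pi_hat = (yes_rate - p_forced_yes) / p_sensitive,
    clipped to the valid range [0, 1]."""
    pi = (yes_rate - p_forced_yes) / p_sensitive
    return min(max(pi, 0.0), 1.0)
```

Because respondents know the interviewer cannot tell a forced "yes" from a truthful one, the design trades some statistical efficiency for a reduced incentive to under-report.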

  7. Estimating cubic volume of small diameter tree-length logs from ponderosa and lodgepole pine.

    Science.gov (United States)

    Marlin E. Plank; James M. Cahill

    1984-01-01

    A sample of 351 ponderosa pine (Pinus ponderosa Dougl. ex Laws.) and 509 lodgepole pine (Pinus contorta Dougl. ex Loud.) logs was used to evaluate the performance of three commonly used formulas for estimating cubic volume. Smalian's formula, Bruce's formula, and Huber's formula were tested to determine which...
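The three log-volume formulas under evaluation can be sketched as follows; Bruce's butt-log formula is given in its commonly quoted weighted-end-area form (diameters and length in consistent units, e.g. meters):

```python
import math

def end_area(diameter):
    """Cross-sectional area of a circular section from its diameter."""
    return math.pi * diameter ** 2 / 4.0

def smalian(d_large, d_small, length):
    """Smalian's formula: mean of the two end areas times length."""
    return (end_area(d_large) + end_area(d_small)) / 2.0 * length

def huber(d_mid, length):
    """Huber's formula: mid-point cross-sectional area times length."""
    return end_area(d_mid) * length

def bruce(d_large, d_small, length):
    """Bruce's butt-log formula, commonly given as
    (0.25 * large-end area + 0.75 * small-end area) * length."""
    return (0.25 * end_area(d_large) + 0.75 * end_area(d_small)) * length
```

On a perfect cylinder all three agree; on tapered logs Smalian's formula tends to overestimate and Huber's to underestimate, which is the kind of bias the study's test data expose.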

  8. Probabilistic divergence time estimation without branch lengths: dating the origins of dinosaurs, avian flight and crown birds.

    Science.gov (United States)

    Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E

    2016-11-01

    Branch lengths, measured in character changes, are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches is typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.

  9. Estimating the Celestial Reference Frame via Intra-Technique Combination

    Science.gov (United States)

    Iddink, Andreas; Artz, Thomas; Halsig, Sebastian; Nothnagel, Axel

    2016-12-01

    One of the primary goals of Very Long Baseline Interferometry (VLBI) is the determination of the International Celestial Reference Frame (ICRF). Currently, the third realization of the internationally adopted CRF, the ICRF3, is under preparation. In this process, various optimizations are planned to realize a CRF that benefits from more than just the increased number of observations since the ICRF2 was published. The new ICRF can also benefit from an intra-technique combination, as is done for the Terrestrial Reference Frame (TRF). Here, we aim at estimating an optimized CRF by means of an intra-technique combination. The solutions are based on the input to the official combined product of the International VLBI Service for Geodesy and Astrometry (IVS), which also provides the radio source parameters. We discuss the differences in the setup using a different number of contributions and investigate the impact on the TRF and CRF as well as on the Earth Orientation Parameters (EOPs). In particular, we investigate the differences between the combined CRF and the individual CRFs from the different analysis centers.

  10. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    Science.gov (United States)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply that a minimum of 100 biogeochemical Argo floats is required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.

  11. Dosimetry with semiconductor diodes in the application to the full-length irradiation technique of electrons

    International Nuclear Information System (INIS)

    Madrid G, O. A.; Rivera M, T.

    2012-10-01

    The use of charged particles such as electrons for the treatment of tumor-like lesions over the total skin surface is not very frequent; conditions such as mycosis fungoides and cutaneous lymphomas are relatively scarce compared with other neoplasms. However, for the existing cases, a non-conventional technique should be contemplated as a treatment alternative that can achieve effective control. In this work, the variables of greatest influence are studied with an ionization chamber and semiconductor diodes in order to determine the quality of an electron beam. (Author)

  12. submitter Estimation of stepping motor current from long distances through cable-length-adaptive piecewise affine virtual sensor

    CERN Document Server

    Oliveri, Alberto; Masi, Alessandro; Storace, Marco

    2015-01-01

    In this paper a piecewise affine virtual sensor is used for the estimation of the motor-side current of hybrid stepper motors, which actuate the LHC (Large Hadron Collider) collimators at CERN. The estimation is performed starting from measurements of the current in the driver, which is connected to the motor by a long cable (up to 720 m). The measured current is therefore affected by noise and ringing phenomena. The proposed method does not require a model of the cable, since it is only based on measured data and can be used with cables of different length. A circuit architecture suitable for FPGA implementation has been designed and the effects of fixed point representation of data are analyzed.

  13. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, consideration was given to the fact that the input data for the required regression model could be easily extracted from sketches or the scope definition of the project. 11 regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r2 for the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the forecast models fit the real-life data. The mean absolute percentage error (MAPE) of the developed regression models ranges from 13% to 31%; these results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
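
    The models described are standard multiple linear regressions scored with MAPE. As a minimal sketch (not the authors' actual model or data), the length-and-width variant and its MAPE can be reproduced as follows; the five projects below are hypothetical:

    ```python
    import numpy as np

    # Hypothetical data: road length (km), road width (m), total cost (USD).
    # The actual study used 131 projects collected in the West Bank.
    X_raw = np.array([[1.2, 6.0], [2.5, 7.5], [0.8, 5.0], [3.1, 8.0], [1.9, 6.5]])
    y = np.array([150_000.0, 310_000.0, 95_000.0, 420_000.0, 240_000.0])

    # Multiple linear regression by ordinary least squares:
    # cost ~ b0 + b1*length + b2*width
    X = np.column_stack([np.ones(len(X_raw)), X_raw])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coef

    # Mean absolute percentage error, the accuracy measure used in the study.
    mape = np.mean(np.abs((y - y_hat) / y)) * 100
    print(f"MAPE: {mape:.1f}%")
    ```

    With real bid data, each of the study's 11 models would simply swap in a different set of input columns.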

  14. ESTIMATING A DOSE-RESPONSE RELATIONSHIP BETWEEN LENGTH OF STAY AND FUTURE RECIDIVISM IN SERIOUS JUVENILE OFFENDERS*

    Science.gov (United States)

    Loughran, Thomas A.; Mulvey, Edward P.; Schubert, Carol A.; Fagan, Jeffrey; Piquero, Alex R.; Losoya, Sandra H.

    2009-01-01

    The effect of sanctions on subsequent criminal activity is of central theoretical importance in criminology. A key question for juvenile justice policy is the degree to which serious juvenile offenders respond to sanctions and/or treatment administered by the juvenile court. The policy question germane to this debate is finding the level of confinement within the juvenile justice system that maximizes the public safety and therapeutic benefits of institutional confinement. Unfortunately, research on this issue has been limited with regard to serious juvenile offenders. We use longitudinal data from a large sample of serious juvenile offenders from two large cities to 1) estimate a causal treatment effect of institutional placement, as opposed to probation, on future rate of rearrest and 2) investigate the existence of a marginal effect (i.e., benefit) for longer length of stay once the institutional placement decision had been made. We accomplish the latter by determining a dose-response relationship between the length of stay and future rates of rearrest and self-reported offending. The results suggest that an overall null effect of placement exists on future rates of rearrest or self-reported offending for serious juvenile offenders. We also find that, for the group placed out of the community, it is apparent that little or no marginal benefit exists for longer lengths of stay. Theoretical, empirical, and policy issues are outlined. PMID:20052309

  15. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
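
    The two scatterplot parameters described above can be computed directly from a sliding window of the 13 most recent RR intervals. The sketch below is an illustration under stated assumptions, not the authors' code; the threshold values are hypothetical placeholders, whereas the study tuned patient-specific ranges on an hour of preliminary data:

    ```python
    import numpy as np

    def scatter_features(rr):
        """Compute the two scatterplot parameters from the 13 most recent
        RR intervals rr = [T13, ..., T2, T1] (oldest first): RRv, the RR
        variability estimated as the average second derivative over the 10
        previous pairs of RR differences (T13..T2), and Tm - T1, where Tm
        is the mean of T13..T2."""
        rr = np.asarray(rr, dtype=float)
        assert len(rr) == 13
        t1 = rr[-1]
        older = rr[:-1]                              # T13 .. T2 (12 values)
        rrv = np.mean(np.abs(np.diff(older, n=2)))   # 10 second differences
        return rrv, older.mean() - t1

    def predict_prolonged(rr, rrv_max=0.05, tm_minus_t1_min=0.1):
        """Flag the next interval T0 as likely prolonged when both features
        fall inside a box; these particular thresholds are hypothetical."""
        rrv, diff = scatter_features(rr)
        return rrv <= rrv_max and diff >= tm_minus_t1_min

    # Example: a steady rhythm (~0.8 s) ending with a short beat.
    history = [0.80] * 12 + [0.55]
    print(predict_prolonged(history))   # → True
    ```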

  16. Controlled trial of the effect of length, incentives, and follow-up techniques on response to a mailed questionnaire.

    Science.gov (United States)

    Hoffman, S C; Burke, A E; Helzlsouer, K J; Comstock, G W

    1998-11-15

    Mailed questionnaires are an economical method of data collection for epidemiologic studies, but response tends to be lower than for telephone or personal interviews. As part of a follow-up study of volunteers who provided a brief health history and blood sample for a blood specimen bank in 1989, the authors conducted a controlled trial of the effect of length, incentives, and follow-up techniques on response to a mailed questionnaire. Interventions tested included variations on length of the questionnaire, effect of a monetary incentive, and effect of a postcard reminder versus a letter accompanied by a second questionnaire. Response was similar for the short (16-item, 4-page) and long (76-item, 16-page) questionnaire groups. The non-monetary [corrected] incentive did not improve the frequency of response. The second mailing of a questionnaire was significantly better than a postcard reminder in improving responses (23% vs. 10%). It is important to systematically test marketing principles to determine which techniques are effective in increasing response to mailed questionnaires for epidemiologic studies.

  17. Project cost estimation techniques used by most emerging building ...

    African Journals Online (AJOL)

    Keywords: Cost estimation, estimation methods, emerging contractors, tender. Dr Solly Matshonisa .... historical cost data (data from cost accounting records and/ ..... emerging contractors in tendering. Table 13: Use of project risk management versus responsibility: expected. Internal document analysis. Checklist analysis.

  18. Intercomparison of techniques for estimation of sedimentation rate in the Sabah and Sarawak coastal waters

    International Nuclear Information System (INIS)

    Zal U'yun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abd Rahim Mohamed

    2011-01-01

    A total of eight sediment cores of 50 cm length were taken in the Sabah and Sarawak coastal waters using a gravity corer in 2004 to estimate sedimentation rates using four mathematical models: CIC, Shukla-CIC, CRS and ADE. The average sedimentation rate ranged from 0.24 to 0.48 cm year⁻¹, calculated from the vertical profile of excess ²¹⁰Pb (²¹⁰Pbex) in the sediment cores. The findings also showed that the sedimentation rates derived from the four models were generally in good agreement, with similar or comparable values at some stations. However, a paired-sample t-test indicated that the CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in this coastal area. (author)
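
    Of the four models compared, CIC (constant initial concentration) is the simplest: excess 210Pb activity decays exponentially with depth, so the sedimentation rate follows from a linear fit of log-activity against depth. A minimal sketch, using a synthetic core profile rather than the study's data:

    ```python
    import numpy as np

    LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, yr^-1

    def cic_sedimentation_rate(depth_cm, activity):
        """Constant Initial Concentration (CIC) model: excess 210Pb decays as
        A(z) = A(0) * exp(-lambda * z / S), so a linear fit of ln(A) against
        depth z has slope -lambda/S, giving S = -lambda / slope (cm/yr).
        Illustrative sketch only; the study also compared Shukla-CIC, CRS
        and ADE models."""
        slope, _ = np.polyfit(depth_cm, np.log(activity), 1)
        return -LAMBDA_PB210 / slope

    # Hypothetical core with a known sedimentation rate of 0.32 cm/yr.
    z = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    a = 50.0 * np.exp(-LAMBDA_PB210 / 0.32 * z)   # synthetic 210Pbex profile
    print(round(cic_sedimentation_rate(z, a), 2))  # → 0.32
    ```

    Real profiles scatter around the fitted line, which is why the study cross-checked the models statistically.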

  19. Estimation of Correlation Functions by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Dec Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the most important properties of the technique is given. The review is mainly based on recently achieved results that are still unpublished, or that have just...

  20. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Science.gov (United States)

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam. Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  1. MAGNETIC QUENCHING OF TURBULENT DIFFUSIVITY: RECONCILING MIXING-LENGTH THEORY ESTIMATES WITH KINEMATIC DYNAMO MODELS OF THE SOLAR CYCLE

    International Nuclear Information System (INIS)

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu

    2011-01-01

    The turbulent magnetic diffusivity in the solar convection zone is one of the most poorly constrained ingredients of mean-field dynamo models. This lack of constraint has previously led to controversy regarding the most appropriate set of parameters, as different assumptions on the value of turbulent diffusivity lead to radically different solar cycle predictions. Typically, the dynamo community uses double-step diffusivity profiles characterized by low values of diffusivity in the bulk of the convection zone. However, these low diffusivity values are not consistent with theoretical estimates based on mixing-length theory, which suggest much higher values for turbulent diffusivity. To make matters worse, kinematic dynamo simulations cannot yield sustainable magnetic cycles using these theoretical estimates. In this work, we show that magnetic cycles become viable if we combine the theoretically estimated diffusivity profile with magnetic quenching of the diffusivity. Furthermore, we find that the main features of this solution can be reproduced by a dynamo simulation using a prescribed (kinematic) diffusivity profile that is based on the spatiotemporal geometric average of the dynamically quenched diffusivity. This bridges the gap between dynamically quenched and kinematic dynamo models, supporting their usage as viable tools for understanding the solar magnetic cycle.

  2. Intercomparison of methods for the estimation of displacement height and roughness length from single-level eddy covariance data

    Science.gov (United States)

    Graf, Alexander; van de Boer, Anneke; Schüttemeyer, Dirk; Moene, Arnold; Vereecken, Harry

    2013-04-01

    The displacement height d and roughness length z0 are parameters of the logarithmic wind profile and as such are characteristics of the surface that are required in a multitude of meteorological modeling applications. Classically, both parameters are estimated from multi-level measurements of wind speed over a terrain sufficiently homogeneous to avoid footprint-induced differences between the levels. As a rule-of-thumb, d of a dense, uniform crop or forest canopy is 2/3 to 3/4 of the canopy height h, and z0 about 10% of canopy height in the absence of any d. However, the uncertainty of this rule-of-thumb becomes larger if the surface of interest is not "dense and uniform", in which case a site-specific determination is required again. By means of the eddy covariance method, alternative possibilities to determine z0 and d have become available. Various authors report robust results if either several levels of sonic anemometer measurements, or one such level combined with a classic wind profile, is used to introduce direct knowledge of the friction velocity into the estimation procedure. At the same time, however, the eddy covariance method of measuring various fluxes has superseded the profile method, leaving many current stations without a wind speed profile with enough levels sufficiently far above the canopy to enable the classic estimation of z0 and d. From single-level eddy covariance measurements at one point in time, only one parameter can be estimated, usually z0, while d is assumed to be known. Even so, results tend to scatter considerably. However, it has been pointed out that the use of multiple points in time providing different stability conditions can enable the estimation of both parameters, if they are assumed constant over the time period regarded. These methods either rely on flux-variance similarity (Weaver 1990 and others following), or on the integrated universal function for momentum (Martano 2000 and others following). In both cases

  3. Nonlinear Filtering Techniques Comparison for Battery State Estimation

    Directory of Open Access Journals (Sweden)

    Aspasia Papazoglou

    2014-09-01

    Full Text Available The performance of estimation algorithms is vital for the correct functioning of batteries in electric vehicles, as poor estimates will inevitably jeopardize the operations that rely on un-measurable quantities, such as State of Charge and State of Health. This paper compares the performance of three nonlinear estimation algorithms: the Extended Kalman Filter, the Unscented Kalman Filter and the Particle Filter, where a lithium-ion cell model is considered. The effectiveness of these algorithms is measured by their ability to produce accurate estimates against their computational complexity in terms of number of operations and execution time required. The trade-offs between estimators' performance and their computational complexity are analyzed.

  4. HLA-DPB1 typing with polymerase chain reaction and restriction fragment length polymorphism technique in Danes

    DEFF Research Database (Denmark)

    Hviid, Thomas Vauvert F.; Madsen, Hans O; Morling, Niels

    1992-01-01

    We have used the polymerase chain reaction (PCR) in combination with the restriction fragment length polymorphism (RFLP) technique for HLA-DPB1 typing. After PCR amplification of the polymorphic second exon of the HLA-DPB1 locus, the PCR product was digested with seven allele-specific restriction...... endonucleases: RsaI, FokI, ApaI, SacI, BstUI, EcoNI, and DdeI, and the DNA fragments were separated by electrophoresis in agarose gels. Altogether, 71 individuals were investigated and 16 different HLA-DPB1 types were observed in 26 different heterozygotic combinations, as well as five possible homozygotes....... Four heterozygotes could not be unequivocally typed with the PCR-RFLP method. The HLA-DPB1 typing results obtained with the PCR-RFLP method were compared with the typing results obtained with PCR allele-specific oligonucleotides (PCR-ASO) in 50 individuals. The results obtained with the two methods...

  5. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    DEFF Research Database (Denmark)

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke

    2012-01-01

    BACKGROUND: Telomeres, the protective caps of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent...... steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. RESULTS...... to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. CONCLUSION: Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control...

  6. Cardiac-Specific Conversion Factors to Estimate Radiation Effective Dose From Dose-Length Product in Computed Tomography.

    Science.gov (United States)

    Trattner, Sigal; Halliburton, Sandra; Thompson, Carla M; Xu, Yanping; Chelliah, Anjali; Jambawalikar, Sachin R; Peng, Boyu; Peters, M Robert; Jacobs, Jill E; Ghesani, Munir; Jang, James J; Al-Khalidi, Hussein; Einstein, Andrew J

    2018-01-01

    This study sought to determine updated conversion factors (k-factors) that would enable accurate estimation of radiation effective dose (ED) for coronary computed tomography angiography (CTA) and calcium scoring performed on 12 contemporary scanner models and current clinical cardiac protocols and to compare these methods to the standard chest k-factor of 0.014 mSv·mGy⁻¹·cm⁻¹. Accurate estimation of ED from cardiac CT scans is essential to meaningfully compare the benefits and risks of different cardiac imaging strategies and optimize test and protocol selection. Presently, ED from cardiac CT is generally estimated by multiplying a scanner-reported parameter, the dose-length product, by a k-factor which was determined for noncardiac chest CT, using single-slice scanners and a superseded definition of ED. Metal-oxide-semiconductor field-effect transistor radiation detectors were positioned in organs of anthropomorphic phantoms, which were scanned using all cardiac protocols, 120 clinical protocols in total, on 12 CT scanners representing the spectrum of scanners from 5 manufacturers (GE, Hitachi, Philips, Siemens, Toshiba). Organ doses were determined for each protocol, and ED was calculated as defined in International Commission on Radiological Protection Publication 103. Effective doses and scanner-reported dose-length products were used to determine k-factors for each scanner model and protocol. k-Factors averaged 0.026 mSv·mGy⁻¹·cm⁻¹ (95% confidence interval: 0.0258 to 0.0266) and ranged between 0.020 and 0.035 mSv·mGy⁻¹·cm⁻¹. The standard chest k-factor underestimates ED by an average of 46%, ranging from 30% to 60%, depending on scanner, mode, and tube potential. Factors were higher for prospective axial versus retrospective helical scan modes, calcium scoring versus coronary CTA, and higher (100 to 120 kV) versus lower (80 kV) tube potential and varied among scanner models (range of average k-factors: 0.0229 to 0.0277 mSv·mGy⁻¹·cm⁻¹). Cardiac k
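
    The k-factor method itself is a one-line calculation: effective dose is the scanner-reported dose-length product multiplied by k. A small illustration comparing the paper's average cardiac k-factor against the standard chest value; the DLP below is hypothetical:

    ```python
    # Effective dose from a scanner-reported dose-length product (DLP):
    # ED (mSv) = k * DLP (mGy*cm). The study reports cardiac k-factors
    # averaging 0.026 mSv/(mGy*cm) versus the standard chest value 0.014.
    def effective_dose_msv(dlp_mgy_cm, k=0.026):
        return k * dlp_mgy_cm

    dlp = 800.0   # hypothetical coronary CTA dose-length product
    print(f"{effective_dose_msv(dlp):.1f}")           # cardiac-specific estimate
    print(f"{effective_dose_msv(dlp, k=0.014):.1f}")  # standard chest k-factor
    ```

    The gap between the two outputs is the underestimation the study quantifies (46% on average).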

  7. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  8. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the value of temperature at an unallocated meteorology station in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
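
    Of the two point-based interpolators compared, IDW is the simpler: the estimate at an unsampled point is a weighted mean of station values, with weights proportional to inverse distance raised to a power. A minimal sketch with hypothetical station coordinates and temperatures (not the study's data):

    ```python
    import numpy as np

    def idw(stations, values, target, power=2.0):
        """Inverse Distance Weighted estimate at `target` from station values;
        weights are 1/d**power. The study's other interpolator, RBF, instead
        fits radial basis functions (thin plate spline, multiquadric)."""
        stations = np.asarray(stations, dtype=float)
        values = np.asarray(values, dtype=float)
        d = np.linalg.norm(stations - np.asarray(target, dtype=float), axis=1)
        if np.any(d == 0):                 # target coincides with a station
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    # Hypothetical station coordinates (degrees) and monthly mean temps (°C).
    pts = [(3.1, 101.7), (5.4, 100.3), (1.5, 103.8)]
    temps = [27.5, 27.0, 26.8]
    print(round(idw(pts, temps, (3.0, 102.0)), 2))   # → 27.48
    ```

    Because the weights are strictly positive, the IDW estimate always lies between the minimum and maximum observed station values.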

  9. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  10. Costs of regulatory compliance: categories and estimating techniques

    International Nuclear Information System (INIS)

    Schulte, S.C.; McDonald, C.L.; Wood, M.T.; Cole, R.M.; Hauschulz, K.

    1978-10-01

    Use of the categorization scheme and cost-estimating approaches presented in this report can make cost estimates of regulation-required compliance activities of value to policy makers. The report describes a uniform assessment framework that, when used, would assure that cost studies are generated on an equivalent basis. Such normalization would make comparisons of different compliance-activity cost estimates more meaningful, thus enabling the relative merits of different regulatory options to be judged more effectively. The framework establishes uniform cost-reporting accounts and cost-estimating approaches for use in assessing the costs of complying with regulatory actions. The framework was specifically developed for use in a current study at Pacific Northwest Laboratory; however, use of the procedures for other applications is also appropriate

  11. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Ç elebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine

  12. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating the growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of the caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. For that matter, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust length and juvenile-proportion estimates. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, for either on-board or inland biometric measurements.

  13. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  14. A Novel DOA Estimation Algorithm Using Array Rotation Technique

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lan

    2014-03-01

    Full Text Available The performance of traditional direction of arrival (DOA) estimation algorithms based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. In order to solve these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple-target estimation is proposed. By rotating two elements and setting a fixed time delay, an even number of virtual elements is obtained to form a virtual UCA. The received signal is then sampled at multiple positions, which greatly improves array element utilization. 2D-DOA estimation on the rotating array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm, with high resolution and estimation accuracy. Moreover, because of the significant reduction in the number of array elements, the antenna array system is much simpler and less complex than a traditional array.
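
    The spectral-search step of the proposed method is the classical MUSIC algorithm. The sketch below illustrates only that step, on a plain uniform linear array with synthetic data; the paper's baseline-rotation and virtual-UCA construction are not reproduced here:

    ```python
    import numpy as np

    def music_spectrum(X, n_sources, d=0.5):
        """Classical MUSIC pseudospectrum for a uniform linear array with
        element spacing d (in wavelengths). X is the (elements x snapshots)
        data matrix; the spectrum peaks near the true arrival angles."""
        grid = np.linspace(-90.0, 90.0, 361)          # 0.5-degree search grid
        R = X @ X.conj().T / X.shape[1]               # sample covariance
        _, vecs = np.linalg.eigh(R)                   # eigenvalues ascending
        En = vecs[:, :-n_sources]                     # noise subspace
        m = np.arange(X.shape[0])
        p = np.empty_like(grid)
        for i, theta in enumerate(grid):
            a = np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(theta)))
            p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        return grid, p

    # Synthetic check: one narrowband source at 20 degrees, 8-element ULA.
    rng = np.random.default_rng(0)
    m = np.arange(8)
    steer = np.exp(-2j * np.pi * 0.5 * m * np.sin(np.deg2rad(20.0)))
    sig = rng.standard_normal(200) + 1j * rng.standard_normal(200)
    noise = 0.01 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
    X = np.outer(steer, sig) + noise
    grid, p = music_spectrum(X, n_sources=1)
    print(grid[np.argmax(p)])   # peak should be near 20
    ```

    The rotation scheme in the paper changes how X is assembled (virtual elements sampled over time), while the subspace search above stays the same.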

  15. Indirect child mortality estimation technique to identify trends of ...

    African Journals Online (AJOL)

    Background: In sub-Saharan African countries, the chance of a child dying before the age of five years is high. The prob- ... of child birth and the age distribution of child mortal- ity11,12. ... value can be estimated from age-specific fertility rates.

  16. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time was presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require the input of questionable distributional assumptions, while neural computing has weaknesses, particularly in determining network topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model was presented along with its application in a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction. It can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results compared to the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. In order to demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays, and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.

  17. A comparison of spatial rainfall estimation techniques: A case study ...

    African Journals Online (AJOL)

    Two geostatistical interpolation techniques (kriging and cokriging) were evaluated against inverse distance weighted (IDW) and global polynomial interpolation (GPI). Of the four spatial interpolators, kriging and cokriging produced results with the least root mean square error (RMSE). A digital elevation model (DEM) was ...
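
    Of the four interpolators compared, IDW is the simplest to state: each sample contributes with a weight inversely proportional to a power of its distance from the query point. A minimal sketch (illustrative, not the study's implementation):

```python
import math

def idw(points, q, power=2.0):
    """Inverse distance weighted estimate at query point q.
    points: list of (x, y, value) samples; power: distance exponent."""
    num, den = 0.0, 0.0
    for x, y, v in points:
        d = math.hypot(x - q[0], y - q[1])
        if d == 0.0:
            return v  # query coincides with a sample: return it exactly
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

    Kriging replaces these purely geometric weights with weights derived from a fitted variogram model, which is why it can outperform IDW on RMSE when spatial correlation is well characterized.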

  18. Metrological and reliable characteristics of transducers: estimation techniques

    International Nuclear Information System (INIS)

    Volkov, V.A.; Ryzhakov, V.V.

    1993-01-01

    Methods and techniques of finding the evaluations of metering method error dispersions by different factors, non-linearity of transformation functions, their hysteresis, as well as evaluations of full operating time of long-term use metering means, are presented. A program of static data computer processing is given. 65 refs

  19. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
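
    The denoise estimator mentioned above can be sketched as follows: form the per-subcarrier least-squares estimate, transform it to the delay domain, zero the taps beyond the assumed channel length (those carry mostly noise), and transform back. The toy DFT-based version below is a sketch under idealized assumptions (known pilots, channel shorter than n_taps), not the authors' implementation:

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT (O(n^2)); IDFT is scaled by 1/n."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def denoise_channel_estimate(rx, pilots, n_taps):
    """DFT-based denoise channel estimator for known pilot symbols."""
    # raw least-squares estimate per subcarrier: H_ls[k] = Y[k] / X[k]
    h_ls = [y / x for y, x in zip(rx, pilots)]
    # delay domain: keep only the first n_taps (within the assumed channel
    # length), zero the rest, then return to the frequency domain
    h_time = dft(h_ls, inverse=True)
    h_time = [h if i < n_taps else 0.0 for i, h in enumerate(h_time)]
    return dft(h_time)
```

    The zeroed taps are exactly where the noise variance can be read off, which is how channel estimation and noise variance estimation couple in such receivers.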

  20. Cost Estimation Techniques for C3I System Software.

    Science.gov (United States)

    1984-07-01

    opment manmonth have been determined for maxi, midi, and mini type computers. Small to median size timeshared developments used 0.2 to 1.5 hours...development schedule 1.23 1.00 1.10 2.1.3 Detailed Model The final codification of the COCOMO regressions was the development of separate effort...regardless of the software structure level being estimated: D8VC -- the expected development computer (maxi, midi, mini, micro) MODE -- the expected

  1. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
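
    The principle of the TLE tally can be illustrated in one dimension: instead of depositing energy only at sampled interaction points (analogous MC), every voxel a photon track crosses scores kerma proportional to the track length in that voxel times the energy-absorption coefficient. The sketch below uses illustrative values and a single absorbing interaction per photon; it is a toy model, not the GATE implementation:

```python
import math
import random

def tle_dose_1d(n_photons, mu, mu_en, voxel_len, n_voxels, energy=1.0, seed=1):
    """Toy 1-D track length estimator under the kerma approximation:
    each voxel crossed by a photon scores track_length * mu_en * energy."""
    rng = random.Random(seed)
    dose = [0.0] * n_voxels
    for _ in range(n_photons):
        # sample the free path to the first interaction; the photon is then
        # assumed absorbed (no scattering in this toy model)
        path = -math.log(1.0 - rng.random()) / mu
        for i in range(n_voxels):
            x0, x1 = i * voxel_len, (i + 1) * voxel_len
            if path <= x0:
                break
            dose[i] += (min(path, x1) - x0) * mu_en * energy
    return [d / n_photons for d in dose]
```

    Because every crossed voxel scores on every history, the per-voxel variance is far lower than for an analogous interaction-point tally, which is the source of the large efficiency gains quoted above.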

  2. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1, 5, 10, 15, 30, and 60 seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm, but not when using the other WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms. Adjusting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
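
    Reprocessing at different epoch lengths amounts to re-integrating the raw per-second counts and then binning epochs against cut-points. A minimal sketch (illustrative function names and thresholds, not the authors' processing pipeline):

```python
def reintegrate(counts_1s, epoch_s):
    """Collapse 1-second accelerometer counts into epochs of epoch_s seconds
    by summing; an incomplete trailing epoch is dropped."""
    n_full = len(counts_1s) // epoch_s
    return [sum(counts_1s[i * epoch_s:(i + 1) * epoch_s]) for i in range(n_full)]

def minutes_in_range(epochs, epoch_s, lo, hi):
    """Minutes spent with epoch counts in [lo, hi) -- one cut-point bin."""
    return sum(1 for c in epochs if lo <= c < hi) * epoch_s / 60.0
```

    Summing to 60-second epochs smooths short bursts of movement into the surrounding behavior, which is one mechanism behind the epoch-length differences reported above.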

  3. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
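
    The recursive structure of such an estimator can be illustrated with a scalar Kalman filter that tracks a slowly varying unknown input from noisy response measurements. This is a deliberately simplified sketch; the paper's estimator operates on a reduced-order state-space model of the rotor and a recursive least squares input stage:

```python
def kalman_constant(measurements, q=1e-6, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter estimating a (near-)constant unknown input
    from noisy readings; q, r are process and measurement noise variances."""
    x, p = x0, p0
    for z in measurements:
        p += q                    # predict: random-walk model for the input
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the measurement residual
        p *= (1 - k)              # posterior variance
    return x
```

    The robustness to measurement noise reported above comes from exactly this gain mechanism: as confidence in the estimate grows, k shrinks and noisy residuals are down-weighted.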

  4. New Solid Phases for Estimation of Hormones by Radioimmunoassay Technique

    International Nuclear Information System (INIS)

    Sheha, R.R.; Ayoub, H.S.M.; Shafik, M.

    2013-01-01

    The efforts in this study were directed at developing and validating new solid phases for the estimation of hormones by radioimmunoassay (RIA). The study demonstrated the successful application of different hydroxyapatites (HAP) as new solid phases for the estimation of alpha fetoprotein (AFP), thyroid stimulating hormone (TSH) and luteinizing hormone (LH) in human serum. Hydroxyapatites with different alkaline earth elements were successfully prepared by a well-controlled co-precipitation method with a stoichiometric ratio of 1.67. The synthesized barium and calcium hydroxyapatites were characterized using XRD and FTIR, and the data confirmed the preparation of pure structures of both BaHAP and CaHAP with no evidence of additional phases. The prepared solid phases were applied in various radioimmunoassay systems for the separation of bound and free antigens of the AFP, TSH and LH hormones. Radiolabeled tracers for these antigens were prepared using chloramine-T as the oxidizing agent. The influence of different parameters on the activation and coupling of the apatite particles with the polyclonal antibodies was systematically investigated and the optimum conditions were determined. The assay was reproducible, specific and sensitive enough for routine estimation of the studied hormones. The intra- and inter-assay variations were satisfactory, and the recovery and dilution tests indicated accurate calibration. The reliability of these apatite particles was validated by comparing the results with those obtained using commercial kits. The results confirm that hydroxyapatite particles have great potential to address the emerging challenge of accurate quantitation in laboratory medical applications.

  5. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
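
    For concreteness, the Metropolis-Hastings prescription can be sketched in a few lines: propose a random-walk step, and accept it with probability min(1, posterior ratio). The toy sampler below targets a standard normal posterior; it is illustrative only, not the cosmological pipeline used in the paper:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)  # rejected proposals repeat the current state
    return chain
```

    The tuning burden mentioned above is visible here: the step size must be chosen by hand, whereas affine-invariant ensembles and nested sampling adapt to the target's scale automatically.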

  6. Comparative Study of Online Open Circuit Voltage Estimation Techniques for State of Charge Estimation of Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Hicham Chaoui

    2017-04-01

    Full Text Available Online estimation techniques are extensively used to determine the parameters of various uncertain dynamic systems. In this paper, online estimation of the open-circuit voltage (OCV) of lithium-ion batteries is proposed by two different adaptive filtering methods (i.e., recursive least squares, RLS, and least mean squares, LMS), along with an adaptive observer. The proposed techniques use the battery’s terminal voltage and current to estimate the OCV, which is correlated to the state of charge (SOC). Experimental results highlight the effectiveness of the proposed methods in online estimation at different charge/discharge conditions and temperatures. The comparative study illustrates the advantages and limitations of each online estimation method.
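
    The RLS branch of such a comparison can be sketched for a simple battery model v = OCV - R*i, estimating the two parameters online with a forgetting factor. The model structure and numbers are illustrative assumptions, not the paper's battery model or test conditions:

```python
def rls_ocv(currents, voltages, lam=0.99):
    """Recursive least squares for v = OCV - R*i with forgetting factor lam.
    Returns the final parameter estimate [OCV, R]."""
    theta = [0.0, 0.0]               # parameter vector [OCV, R]
    P = [[1e3, 0.0], [0.0, 1e3]]     # large initial covariance = weak prior
    for i, v in zip(currents, voltages):
        phi = [1.0, -i]              # regressor: v = phi . theta
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]            # RLS gain
        err = v - (phi[0] * theta[0] + phi[1] * theta[1]) # prediction error
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # covariance update: P <- (P - k (P phi)^T) / lam (P stays symmetric)
        P = [[(P[r][c] - k[r] * Pphi[c]) / lam for c in range(2)] for r in range(2)]
    return theta
```

    The forgetting factor lam < 1 discounts old data, which is what lets the estimate track an OCV that drifts as the state of charge changes.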

  7. Development of flow injection analysis technique for uranium estimation

    International Nuclear Information System (INIS)

    Paranjape, A.H.; Pandit, S.S.; Shinde, S.S.; Ramanujam, A.; Dhumwad, R.K.

    1991-01-01

    Flow injection analysis is increasingly used as a process control analytical technique in many industries. It involves injection of the sample at a constant rate into a steadily flowing stream of reagent and passing this mixture through a suitable detector. This paper describes the development of such a system for the analysis of uranium (VI) and (IV) and its gross gamma activity. It is amenable to on-line or automated off-line monitoring of uranium and its activity in process streams. The sample injection port is suitable for automated injection of radioactive samples. The performance of the system has been tested for the colorimetric response of U(VI) samples at 410 nm in the range of 35 to 360 mg/ml in nitric acid medium using a Metrohm 662 photometer and a recorder as the detector assembly. The precision of the method is found to be better than ±0.5%. This technique, with certain modifications, is used for the analysis of U(VI) in the range 0.1-3 mg/aliquot by the alcoholic thiocyanate procedure within ±1.5% precision. Similarly, the precision for the determination of U(IV) in the range 15-120 mg at 650 nm is found to be better than 5%. With a NaI well-type detector in the flow line, the gross gamma counting of the solution under flow is found to be within a precision of ±5%. (author). 4 refs., 2 figs., 1 tab

  8. Estimation of Postmortem Interval Using the Radiological Techniques, Computed Tomography: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiulin Wang

    2017-01-01

    Full Text Available Estimation of postmortem interval (PMI) has been an important and difficult subject in forensic study. It is a primary task of forensic work, and it can help guide field investigation. With the development of computed tomography (CT) technology, CT imaging techniques are now being more frequently applied to the field of forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and PMI with the imaging software Max Viewer, obtained multiparameter nonlinear regression equations for the different organs, and thereby provides an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the discovery or inspection of the corpse and the time of death. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of the time of death. We observed the subtle postmortem changes in adult rabbits, the shape and size of tissues and organs, and the relationships between adjacent organs in three-dimensional space, in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at 20°C room temperature, under sealed conditions, and protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group. The whole ...

  9. Comparison of techniques for estimating herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.; Taweel, H.Z.; Tas, B.M.; Tamminga, S.; Elgersma, A.

    2005-01-01

    For estimating herbage intake during grazing, the traditional sward cutting technique was compared in grazing experiments in 2002 and 2003 with the recently developed n-alkanes technique and with the net energy method. The first method estimates herbage intake by the difference between the herbage

  10. Estimating soil contamination from oil spill using neutron backscattering technique

    International Nuclear Information System (INIS)

    Okunade, I.O.; Jonah, S.A.; Abdulsalam, M.O.

    2009-01-01

    An analytical facility based on the neutron backscattering technique has been adapted for monitoring oil spills. The facility, which consists of a 1 Ci Am-Be isotopic source and a ³He neutron detector, is based on the principle of the slowing down of neutrons in a given medium, which is dominated by elastic scattering on hydrogen nuclei. On this principle, the neutron reflection parameter in the presence of hydrogenous materials such as coal, crude oil and other hydrocarbon materials depends strongly on the number of hydrogen nuclei present. Consequently, the facility was adapted in this work for the quantification of crude oil in contaminated soil. A description of the facility and the analytical procedures for the quantification of oil spills in soil contaminated with different amounts of crude oil are provided

  11. Background estimation techniques in searches for heavy resonances at CMS

    CERN Document Server

    Benato, Lisa

    2017-01-01

    Many Beyond Standard Model theories foresee the existence of heavy resonances (over 1 TeV) decaying into final states that include a highly energetic, boosted jet and charged leptons or neutrinos. In these very peculiar conditions, Monte Carlo predictions are not reliable enough to reproduce the expected Standard Model background accurately. A data-Monte Carlo hybrid approach (the alpha method) has been successfully adopted since Run 1 in searches for heavy Higgs bosons performed by the CMS Collaboration. By taking advantage of data in signal-free control regions, determined by exploiting the boosted jet substructure, predictions are extracted in the signal region. The alpha method and jet substructure techniques are described in detail, along with some recent results obtained with 2016 Run 2 data collected by the CMS detector.

  12. Measurement of the mass of the top quark using the transverse decay length and lepton transverse momentum techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christian

    2014-05-02

    A measurement of the mass of the top quark using the transverse momentum of the lepton and the decay length of the B-hadron has been presented. The result is m_top = (170.4 ± 1.1 (stat.) ± 2.3 (syst.)) GeV. This is compatible with previous measurements of the top quark mass by the ATLAS collaboration and by other experiments. The total uncertainty of this analysis, Δm_top(total) = 2.6 GeV, is larger than that of other measurements. However, with a jet energy scale uncertainty of only Δm_top(JES) = 0.3 GeV, it has one of the smallest uncertainties from this source; in a combination of results this will help reduce the total uncertainty on the top quark mass. The fitted value of 0.42 for the strength of final state radiation indicates that the simulation underestimates final state radiation. Work is ongoing to publish the results of this thesis as an official ATLAS publication. The uncertainties can also be compared with those obtained using only one of the two variables. Using the transverse decay length alone gives a statistical uncertainty of Δm_top(stat.) = 1.7 GeV and a systematic uncertainty of Δm_top(syst.) = 7.8 GeV, dominated by the uncertainty on initial and final state radiation. The statistical uncertainty from the transverse momentum of the lepton alone, Δm_top(stat.) = 1.4 GeV, is slightly lower than that from the transverse decay length alone, but still larger than that of the presented measurement; the corresponding systematic uncertainty is Δm_top(syst.) = 2.7 GeV. Combining the two variables is therefore worthwhile compared with using the transverse momentum of the lepton alone. The dominant uncertainties on the measurement are caused by imperfect knowledge of the simulation parameters, especially the choice of Monte Carlo generator. Other large ...

  13. Rumen microbial growth estimation using in vitro radiophosphorous incorporation technique

    International Nuclear Information System (INIS)

    Bueno, Ives Claudio da Silva; Machado, Mariana de Carvalho; Cabral Filho, Sergio Lucio Salomon; Gobbo, Sarita Priscila; Vitti, Dorinha Miriam Silber Schmidt; Abdalla, Adibe Luiz

    2002-01-01

    Rumen microorganisms are able to transform the low-biological-value nitrogen of feedstuffs into high-quality protein. To determine how much microbial protein this process forms, radiomarkers can be used. Radiophosphorus has been used to mark microbial protein, as the element P is present in all rumen microorganisms (as phospholipids) and the P:N ratio of rumen biomass is fairly constant. The aim of this work was to estimate microbial synthesis from feedstuffs commonly used in ruminant nutrition in Brazil. The tested feeds were fresh alfalfa, raw sugarcane bagasse, rice hulls, rice meal, soybean meal, wheat meal, Tifton hay, leucaena, dehydrated citrus pulp, wet brewers' grains and cottonseed meal. A ³²P-labelled phosphate solution was used as the marker for microbial protein. The results showed the diversity of the feeds through the distinct quantities of nitrogen incorporated into microbial mass. Feeds of low nutrient availability (sugarcane bagasse and rice hulls) gave the lowest values of incorporated nitrogen. Nitrogen incorporation showed a positive relationship (r=0.56; P=0.06) with the rate of degradation and a negative relationship (r=-0.59; P<0.05) with the fiber content of the feeds. The results highlight that more easily fermentable feeds (higher rates of degradation) and/or feeds with lower fiber contents promote more efficient microbial growth and better performance for the host animal. (author)

  14. Rumen microbial growth estimation using in vitro radiophosphorous incorporation technique

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Ives Claudio da Silva; Machado, Mariana de Carvalho; Cabral Filho, Sergio Lucio Salomon; Gobbo, Sarita Priscila; Vitti, Dorinha Miriam Silber Schmidt; Abdalla, Adibe Luiz [Centro de Energia Nuclear na Agricultura (CENA), Piracicaba, SP (Brazil)

    2002-07-01

    Rumen microorganisms are able to transform the low-biological-value nitrogen of feedstuffs into high-quality protein. To determine how much microbial protein this process forms, radiomarkers can be used. Radiophosphorus has been used to mark microbial protein, as the element P is present in all rumen microorganisms (as phospholipids) and the P:N ratio of rumen biomass is fairly constant. The aim of this work was to estimate microbial synthesis from feedstuffs commonly used in ruminant nutrition in Brazil. The tested feeds were fresh alfalfa, raw sugarcane bagasse, rice hulls, rice meal, soybean meal, wheat meal, Tifton hay, leucaena, dehydrated citrus pulp, wet brewers' grains and cottonseed meal. A ³²P-labelled phosphate solution was used as the marker for microbial protein. The results showed the diversity of the feeds through the distinct quantities of nitrogen incorporated into microbial mass. Feeds of low nutrient availability (sugarcane bagasse and rice hulls) gave the lowest values of incorporated nitrogen. Nitrogen incorporation showed a positive relationship (r=0.56; P=0.06) with the rate of degradation and a negative relationship (r=-0.59; P<0.05) with the fiber content of the feeds. The results highlight that more easily fermentable feeds (higher rates of degradation) and/or feeds with lower fiber contents promote more efficient microbial growth and better performance for the host animal. (author)

  15. Weight-for-length/height growth curves for children and adolescents in China in comparison with body mass index in prevalence estimates of malnutrition.

    Science.gov (United States)

    Zong, Xinnan; Li, Hui; Zhang, Yaqin; Wu, Huahong

    2017-05-01

    It is important to update the weight-for-length/height growth curves in China and re-examine their performance in screening malnutrition. The aim was to develop weight-for-length/height growth curves for Chinese children and adolescents. A total of 94 302 children aged 0-19 years with complete sex, age, weight and length/height data were obtained from two cross-sectional large-scale national surveys in China. Weight-for-length/height growth curves were constructed using the LMS method before and after average spermarcheal/menarcheal ages, respectively. Screening performance in prevalence estimates of wasting, overweight and obesity was compared between the weight-for-height and body mass index (BMI) criteria in a test population of 21 416 children aged 3-18. Smoothed weight-for-length percentile and Z-score growth curves for lengths of 46-110 cm for both sexes, and weight-for-height curves for heights of 70-180 cm for boys and 70-170 cm for girls, were established. Weight-for-height and BMI-for-age were strongly correlated in screening wasting, overweight and obesity in each age-sex group. There was no striking difference in prevalence estimates of wasting, overweight and obesity between the two indicators except for obesity prevalence at ages 6-11. This set of smoothed weight-for-length/height growth curves may be useful in assessing nutritional status from infancy to post-pubertal adolescence.
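
    Given published L, M and S values at a particular length or height, the Z-score follows directly from the LMS transformation. A minimal sketch (the numeric test values below are illustrative, not taken from the Chinese reference curves):

```python
import math

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x under the LMS method:
    z = ((x/M)**L - 1) / (L*S), with the limit z = ln(x/M)/S as L -> 0.
    L: Box-Cox power, M: median, S: coefficient of variation."""
    if abs(L) < 1e-12:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

    The Box-Cox power L is what lets a single formula handle the skewness of weight distributions across the whole length/height range.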

  16. Simplified estimation technique for organic contaminant transport in ground water

    Energy Technology Data Exchange (ETDEWEB)

    Piver, W T; Lindstrom, F T

    1984-05-01

    The analytical solution for one-dimensional dispersive-advective transport of a single solute in a saturated soil, accompanied by adsorption onto soil surfaces and first-order degradation kinetics, can be used to evaluate the suitability of potential sites for the burial of organic chemicals. The technique is most advantageous for organic chemicals present in ground waters in small amounts. The steady-state solution provides a rapid method for chemical landfill site evaluation because it contains the important variables that describe interactions between hydrodynamics and chemical transformation. With this solution, the solute concentration at a specified distance from the landfill site is a function of the initial concentration and two dimensionless groups. The first group compares the relative weights of the advective and dispersive variables, and the second compares the relative weights of the hydrodynamic and degradation variables. The ratio of hydrodynamic to degradation variables can be rearranged and written as (a_L·λ)/(q/ε), where a_L is the dispersivity of the soil, λ is the reaction rate constant, q is the ground water flow velocity, and ε is the soil porosity. When this term has a value less than 0.01, the degradation process is occurring at such a slow rate relative to the hydrodynamics that it can be neglected. Under these conditions the site is unsuitable because the chemicals are unreactive, and concentrations in ground waters will change very slowly with distance away from the landfill site.
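
    The screening rule reduces to computing one dimensionless number. A short sketch with illustrative parameter values (the site numbers are made up, not from the paper):

```python
def hydro_degradation_ratio(a_L, lam, q, eps):
    """Dimensionless group (a_L * lam) / (q / eps): degradation relative to
    hydrodynamics. Below ~0.01 the chemical is effectively unreactive
    over the transport distance."""
    return (a_L * lam) / (q / eps)

# illustrative site: dispersivity 1 m, rate constant 1e-4 per day,
# flow velocity 0.5 m/day, porosity 0.3
ratio = hydro_degradation_ratio(1.0, 1e-4, 0.5, 0.3)
unsuitable = ratio < 0.01  # degradation negligible: chemicals persist
```
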

  17. Using pharyngeal teeth and chewing pads to estimate juvenile Silver Carp total length in the La Grange Reach, Illinois River

    Science.gov (United States)

    Lampo, Eli G.; Knights, Brent C.; Vallazza, Jon; Anderson, Cory A.; Rechkemmer, Will T.; Solomon, Levi E.; Casper, Andrew F.; Pendleton, Richard M.; Lamer, James T.

    2017-01-01

    The Silver Carp Hypophthalmichthys molitrix is an invasive species in the Mississippi River basin; an understanding of their vulnerability to predation as juveniles may inform control by native predators and predator enhancement (e.g., stocking). Digestion of Silver Carp prey recovered from diets makes it difficult to determine the size‐classes that are most vulnerable to predation by native fishes. The objective of this study was to determine whether the sizes of the chewing pad (CP), pharyngeal teeth (PT), and pharyngeal arch (PA)—the Silver Carp structures most often found intact in predator diets—were predictive of the TL of prey Silver Carp. During 2014 and 2015, juvenile Silver Carp (n = 136; <180 mm) were collected using 60‐Hz pulsed‐DC electrofishing and mini‐fyke nets in the La Grange reach of the Illinois River. We extracted Silver Carp CPs (n = 136 fish) and PAs with PT intact (n = 129 fish) and measured CP length (CPL) and width (CPW), eight reproducible PT landmarks (PT1L–PT4L; PT1W–PT4W), and four reproducible PA landmarks (PA1–PA4) to the nearest 0.01 µm. Using simple linear regression, we found a strong predictive relationship between measurements of CP, PT, or PA and the TL of Silver Carp. The CPL (r2 = 0.94) and CPW (r2 = 0.94) had the strongest relationships with Silver Carp TL, followed by PA1 (r2 = 0.89) and PT1L (r2 = 0.87). These strong relationships suggest that all three structures could be used in diet analyses to accurately estimate Silver Carp TL and thus further our understanding of predator–prey dynamics for this high‐risk invasive species.
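
    The structure-to-length relationships above are simple linear regressions. A self-contained OLS sketch reporting the slope, intercept and r² used to compare the candidate structures (the example numbers are made up, not the study's measurements):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

    Fitting TL against each candidate structure measurement and comparing r² values is exactly how the chewing pad emerged as the strongest predictor.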

  18. ESTIMATION OF YELLOWFIN TUNA PRODUCTION LANDED IN BENOA PORT WITH WEIGHT-WEIGHT, LENGTH-WEIGHT RELATIONSHIPS AND CONDITION FACTOR APPROACHES

    Directory of Open Access Journals (Sweden)

    Irwan Jatmiko

    2017-01-01

    Full Text Available Yellowfin tuna (Thunnus albacares) is one of the important catches for the fishing industry in Indonesia. The length-weight relationship is an important tool to support fisheries management. However, it cannot be applied directly to yellowfin tuna landed in Benoa port, since they are landed gilled and gutted. The objectives of this study are to determine the relationship between gilled-gutted weight (GW) and whole weight (WW), to calculate the length-weight relationship between fork length (FL) and estimated whole weight (WW), and to assess the relative condition factor (Kn) of yellowfin tuna in the Eastern Indian Ocean. Data were collected from three landing sites, i.e. Malang, East Java; Benoa, Bali; and Kupang, East Nusa Tenggara, from January 2013 to February 2014. Linear regression analysis was applied to test the significance of the weight-weight relationship and the log-transformed length-weight relationship. The relative condition factor (Kn) was used to identify fish condition among length groups and months. The results showed a significant positive linear relationship between whole weight (WW) and gilled-gutted weight (GW) of T. albacares (p<0.001). There was a significant positive linear relationship between log-transformed fork length and log-transformed whole weight of T. albacares (p<0.001). The relative condition factor (Kn) showed a declining pattern with increasing length and varied among months. The findings from this study provide data for the management of the yellowfin tuna stock and population.
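
    The log-transformed length-weight fit and the relative condition factor can be sketched as follows. The synthetic coefficients in the test are for illustration only; the study's fitted values are not reproduced here:

```python
import math

def length_weight_fit(lengths, weights):
    """Fit log10(W) = log10(a) + b*log10(L) by least squares; return (a, b)
    for the power-law model W = a * L**b."""
    xs = [math.log10(l) for l in lengths]
    ys = [math.log10(w) for w in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = 10 ** (my - b * mx)
    return a, b

def relative_condition(w, l, a, b):
    """Relative condition factor Kn = observed weight / predicted weight."""
    return w / (a * l ** b)
```

    Kn near 1 indicates a fish of typical condition for its length; the declining pattern of Kn with length reported above would appear as values drifting below 1 at the upper length groups.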

  19. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    NARCIS (Netherlands)

    Zweerink, A.; Allaart, C.P.; Kuijer, J.P.A.; Wu, L.; Beek, A.M.; Ven, P.M. van de; Meine, M.; Croisille, P.; Clarysse, P.; Rossum, A.C. van; Nijveldt, R.

    2017-01-01

    OBJECTIVES: Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive

  20. A technique for the radar cross-section estimation of axisymmetric plasmoid

    International Nuclear Information System (INIS)

    Naumov, N D; Petrovskiy, V P; Sasinovskiy, Yu K; Shkatov, O Yu

    2015-01-01

    A model for radio wave backscattering from both penetrable and reflecting plasma is developed. The proposed technique is based on Huygens's principle and reduces the radar cross-section estimation to numerical integrations. (paper)

  1. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    Science.gov (United States)

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  2. Use of tracer technique in estimation of methane (green house gas) from ruminant

    International Nuclear Information System (INIS)

    Singh, G.P.

    1996-01-01

    Several methods developed to estimate methane emission by ruminant livestock, such as feed-fermentation-based techniques, radioisotope tracers and respiration chambers, have been discussed. 6 refs., 3 figs

  3. A rapid technique for estimating the depth and width of a two-dimensional plate from self-potential data

    International Nuclear Information System (INIS)

    Mehanee, Salah; Smith, Paul D; Essa, Khalid S

    2011-01-01

    Rapid techniques for self-potential (SP) data interpretation are of prime importance in engineering and exploration geophysics. Estimation of parameters (e.g. depth, width) of ore bodies has also been of paramount concern in mineral prospecting. In many cases, it is useful to assume that the SP anomaly is due to an ore body of simple geometric shape and to use the data to determine its parameters. In light of this, we describe a rapid approach to determine the depth and horizontal width of a two-dimensional plate from the SP anomaly. The rationale behind the scheme proposed in this paper is that, unlike the two- (2D) and three-dimensional (3D) SP rigorous source current inversions, it does not demand a priori information about the subsurface resistivity distribution nor high computational resources. We apply the second-order moving average operator on the SP anomaly to remove the unwanted (regional) effect, represented by up to a third-order polynomial, using filters of successive window lengths. By defining a function F at a fixed window length (s) in terms of the filtered anomaly computed at two points symmetrically distributed about the origin point of the causative body, the depth (z) corresponding to each half-width (w) is estimated by solving a nonlinear equation in the form ξ(s, w, z) = 0. The estimated depths are then plotted against their corresponding half-widths on a graph representing a continuous curve for this window length. This procedure is then repeated for each available window length. The depth and half-width solution of the buried structure is read at the common intersection of these various curves. The improvement of this method over the published first-order moving average technique for SP data is demonstrated on a synthetic data set.
It is then verified on noisy synthetic data, complicated structures and successfully applied to three field examples for mineral exploration and we have found that the estimated depth is in good agreement with
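
The regional-removal step described above can be illustrated with a minimal sketch. The cubic regional polynomial and the toy local anomaly below are assumptions, not the paper's plate model; the point is that applying the moving-average residual operator twice annihilates a regional trend of up to third order while the local anomaly survives:

```python
import numpy as np

def ma_residual(v, s):
    # First-order moving-average residual: r[i] = v[i] - (v[i-s] + v[i+s]) / 2
    r = np.full(v.size, np.nan)
    r[s:-s] = v[s:-s] - 0.5 * (v[:-2 * s] + v[2 * s:])
    return r

def second_order_residual(v, s):
    # Applying the operator twice annihilates regional trends up to cubic order.
    return ma_residual(ma_residual(v, s), s)

x = np.linspace(-50.0, 50.0, 201)
regional = 0.002 * x**3 - 0.1 * x**2 + 2.0 * x + 5.0   # hypothetical cubic regional field
anomaly = -40.0 / np.hypot(x, 10.0)                    # toy local SP anomaly (not a plate model)

residual = second_order_residual(regional + anomaly, s=10)
# The cubic regional part cancels to floating-point precision; the anomaly remains.
print(np.nanmax(np.abs(second_order_residual(regional, s=10))))
```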

  4. Motion estimation of tagged cardiac magnetic resonance images using variational techniques

    Czech Academy of Sciences Publication Activity Database

    Carranza-Herrezuelo, N.; Bajo, A.; Šroubek, Filip; Santamarta, C.; Cristóbal, G.; Santos, A.; Ledesma-Carbayo, M.J.

    2010-01-01

    Roč. 34, č. 6 (2010), s. 514-522 ISSN 0895-6111 Institutional research plan: CEZ:AV0Z10750506 Keywords : medical imaging processing * motion estimation * variational techniques * tagged cardiac magnetic resonance images * optical flow Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.110, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/sroubek- motion estimation of tagged cardiac magnetic resonance images using variational techniques.pdf

  5. Estimation of genetic variability and selection response for clutch length in dwarf brown-egg layers carrying or not the naked neck gene

    Directory of Open Access Journals (Sweden)

    Tixier-Boichard Michèle

    2003-03-01

    Full Text Available Abstract In order to investigate the possibility of using the dwarf gene for egg production, two dwarf brown-egg laying lines were selected for 16 generations on average clutch length; one line (L1) was normally feathered and the other (L2) was homozygous for the naked neck gene NA. A control line from the same base population, dwarf and segregating for the NA gene, was maintained during the selection experiment under random mating. The average clutch length was normalized using a Box-Cox transformation. Genetic variability and selection response were estimated either with the mixed model methodology, or with the classical methods for calculating genetic gain, as the deviation from the control line, and the realized heritability, as the ratio of the selection response to cumulative selection differentials. Heritability of average clutch length was estimated to be 0.42 ± 0.02 with a multiple trait animal model, whereas the estimates of the realized heritability were lower, being 0.28 and 0.22 in lines L1 and L2, respectively. REML estimates of heritability were found to decline with generations of selection, suggesting a departure from the infinitesimal model, either because a limited number of genes was involved, or because their frequencies were changed. The yearly genetic gains in average clutch length, after normalization, were estimated to be 0.37 ± 0.02 and 0.33 ± 0.04 with the classical methods, and 0.46 ± 0.02 and 0.43 ± 0.01 with the animal model methodology, for lines L1 and L2 respectively, which represented about 30% of the genetic standard deviation on the transformed scale. Selection response appeared to be faster in line L2, homozygous for the NA gene, but the final cumulated selection response for clutch length was not different between the L1 and L2 lines at generation 16.
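
The realized-heritability calculation mentioned above, a regression of cumulative response on cumulative selection differentials, can be sketched with hypothetical per-generation numbers (these are not the paper's data):

```python
import numpy as np

# Illustrative per-generation selection differentials S_t and responses R_t
# on the transformed scale (hypothetical values).
sel_diff = np.array([1.2, 1.1, 1.3, 1.0, 1.15])
response = np.array([0.30, 0.28, 0.33, 0.26, 0.29])

# Realized heritability: slope of cumulative response on cumulative
# selection differential (regression through the origin).
cum_s = np.cumsum(sel_diff)
cum_r = np.cumsum(response)
h2_realized = np.sum(cum_s * cum_r) / np.sum(cum_s**2)
print(round(h2_realized, 2))
```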

  6. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the respective control and synchronization of chaotic systems. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables a very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.
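
A minimal sketch of the time-delay-estimation idea on the Lorenz system: the unknown dynamics acting on one state are estimated from the previous step as f_hat = xdot(t-L) - u(t-L), cancelled, and replaced by stable error dynamics. The single-input structure, Euler integration, and the gain k below are assumptions for illustration, not the paper's controller:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt = 1e-3                 # Euler step; the time delay L is taken as one step
k = 20.0                  # assumed error-feedback gain
x, y, z = 5.0, 5.0, 5.0
u_prev, x_prev = 0.0, x

for _ in range(20000):
    # Time delay estimation: the unknown dynamics f = sigma*(y - x) on the x
    # channel is estimated one step late as f_hat = xdot(t - L) - u(t - L).
    xdot_prev = (x - x_prev) / dt
    f_hat = xdot_prev - u_prev
    u = -f_hat - k * x    # desired trajectory x_d = 0, desired dynamics e_dot = -k*e
    x_prev = x
    # Euler integration of the controlled Lorenz system (input on x only).
    x_dot = sigma * (y - x) + u
    y_dot = x * (rho - z) - y
    z_dot = x * y - beta * z
    x, y, z = x + dt * x_dot, y + dt * y_dot, z + dt * z_dot
    u_prev = u

print(round(x, 4), round(y, 4))
```

With x regulated to zero, the remaining y and z dynamics decay on their own, so the whole state settles near the origin.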

  7. Development and comparison of techniques for estimating design basis flood flows for nuclear power plants

    International Nuclear Information System (INIS)

    1980-05-01

    Estimation of the design basis flood for Nuclear Power Plants can be carried out using either deterministic or stochastic techniques. Stochastic techniques, while widely used for the solution of a variety of hydrological and other problems, have not been used to date (1980) in connection with the estimation of design basis flood for NPP siting. This study compares the two techniques against one specific river site (Galt on the Grand River, Ontario). The study concludes that both techniques lead to comparable results, but that stochastic techniques have the advantage of extracting maximum information from available data and presenting the results (flood flow) as a continuous function of probability together with estimation of confidence limits. (author)
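
The stochastic approach described above, expressing flood flow as a continuous function of probability, can be sketched by fitting an extreme-value distribution to an annual-maximum series. The Gumbel (EV1) choice, the method-of-moments fit, and the synthetic data below are assumptions for illustration, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical annual-maximum flood series (m^3/s), for illustration only.
annual_max = rng.gumbel(loc=500.0, scale=120.0, size=80)

# Gumbel (EV1) fit by the method of moments.
scale = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi
loc = annual_max.mean() - 0.5772 * scale

def flood_quantile(return_period_years):
    # Flow exceeded on average once per T years: non-exceedance prob F = 1 - 1/T.
    p = 1.0 - 1.0 / return_period_years
    return loc - scale * np.log(-np.log(p))

# The fitted curve gives flood flow as a continuous function of probability,
# including very low probabilities (e.g. a 10**6-year return interval).
print(round(flood_quantile(1e2)), round(flood_quantile(1e6)))
```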

  8. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    Science.gov (United States)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    Studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project are reported. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.

  9. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    Science.gov (United States)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.

  10. Optimizing Penile Length in Patients Undergoing Partial Penectomy for Penile Cancer: Novel Application of the Ventral Phalloplasty Oncoplastic Technique

    Directory of Open Access Journals (Sweden)

    Jared J. Wallen

    2014-10-01

    Full Text Available The ventral phalloplasty (VP) has been well described in modern day penile prosthesis surgery. The main objectives of this maneuver are to increase perceived length and patient satisfaction and to counteract the natural 1-2 cm average loss in length when performing implantation of an inflatable penile prosthesis. Similarly, this video represents a new adaptation for partial penectomy patients. One can only hope that the addition of the VP for partial penectomy patients with good erectile function will increase their quality of life. The patient in this video is a 56-year-old male who presented with a 4.0x3.5x1.0 cm, pathologic stage T2 squamous cell carcinoma of the glans penis. After partial penectomy with VP and inguinal lymph node dissection, the pathological specimen revealed negative margins, with 3/5 right superficial nodes and 1/5 left superficial nodes positive for malignancy. The patient has been recommended post-operative systemic chemotherapy (with external beam radiotherapy) based on the multiple node positivity and presence of extranodal extension. The patient’s pre-operative penile length was 9.5 cm, and after partial penectomy with VP, penile length was 7 cm.

  11. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    Science.gov (United States)

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): on comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These
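
The bootstrap stage described above can be sketched in isolation; the five measurement values below are hypothetical, and a simple resampled mean stands in for the DBN-DNN ensemble regressor:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical SBP values from five oscillometric measurements of one subject.
measurements = np.array([118.0, 122.0, 120.5, 119.0, 121.5])

# Bootstrap resampling generates pseudo-samples from the small measurement set.
boot_means = np.array([
    rng.choice(measurements, size=measurements.size, replace=True).mean()
    for _ in range(5000)
])

estimate = boot_means.mean()
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(round(estimate, 1), round(ci_low, 1), round(ci_high, 1))
```

The percentile spread of the bootstrap means is the confidence interval for the BP estimate, which is the quantity the small five-sample feature set makes hard to obtain analytically.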

  12. Estimation of Extra Length of Stay Attributable to Hospital-Acquired Infections in Adult ICUs Using a Time-Dependent Multistate Model.

    Science.gov (United States)

    Ohannessian, Robin; Gustin, Marie-Paule; Bénet, Thomas; Gerbier-Colomban, Solweig; Girard, Raphaele; Argaud, Laurent; Rimmelé, Thomas; Guerin, Claude; Bohé, Julien; Piriou, Vincent; Vanhems, Philippe

    2018-04-10

    The objective of the study was to estimate the length of stay of patients with hospital-acquired infections hospitalized in ICUs using a multistate model. Active prospective surveillance of hospital-acquired infection from January 1, 1995, to December 31, 2012. Twelve ICUs at the University of Lyon hospital (France). Adult patients age greater than or equal to 18 years old and hospitalized greater than or equal to 2 days were included in the surveillance. All hospital-acquired infections (pneumonia, bacteremia, and urinary tract infection) occurring during ICU stay were collected. None. The competitive risks of in-hospital death, transfer, or discharge were considered in estimating the change in length of stay due to infection(s), using a multistate model accounting for time of infection onset. Thirty-three thousand four-hundred forty-nine patients were involved, with an overall hospital-acquired infection attack rate of 15.5% (n = 5,176). Mean length of stay was 27.4 (± 18.3) days in patients with hospital-acquired infection and 7.3 (± 7.6) days in patients without hospital-acquired infection. A multistate model estimated a mean increase in length of stay of 5.0 days (95% CI, 4.6-5.4 d). The extra length of stay increased with the number of infected sites and was higher for patients discharged alive from the ICU. No increased length of stay was found for patients presenting late-onset hospital-acquired infection, more than 25 days after admission. An increased length of stay of 5 days attributable to hospital-acquired infection in the ICU was estimated using a multistate model in a prospective surveillance study in France. The dose-response relationship between the number of hospital-acquired infections and length of stay, and the impact of early-stage hospital-acquired infection, may encourage clinicians to focus interventions on early prevention of hospital-acquired infection in the ICU.

  13. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    Science.gov (United States)

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. Lower limb MRI scans were obtained from the lower limbs of 21 individuals with cerebral palsy (14.7±3 years, 17 male) and 23 typically developing individuals (16.8±3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated from six muscles of the left lower limb. Analysis of covariance demonstrated that the relationship between the length*cross-sectional area product and volume did not differ significantly between the subject groups. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R² values between 0.955 and 0.988), with low standard errors of the estimate (4.8 to 8.9%). This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy by a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
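
The regression of volume on the cross-sectional-area-times-length product can be sketched as follows; the data, the 0.55 shape factor and the 5% scatter are assumptions for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic anatomical cross-sectional area (cm^2) and muscle length (cm);
# the 0.55 shape factor and 5% scatter are illustrative assumptions.
acsa = rng.uniform(5.0, 40.0, 60)
length = rng.uniform(15.0, 40.0, 60)
volume = 0.55 * acsa * length * rng.normal(1.0, 0.05, 60)

# Regress measured volume on the ACSA * length product.
x = acsa * length
slope, intercept = np.polyfit(x, volume, 1)
pred = slope * x + intercept
r2 = 1.0 - np.sum((volume - pred) ** 2) / np.sum((volume - volume.mean()) ** 2)
print(round(r2, 3))
```

When volume is close to proportional to the product, as the study found, the fit explains most of the variance, which is why the two 2D measurements suffice as a volume proxy.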

  14. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mohayai, Tanaz Angelina [IIT, Chicago]; Snopok, Pavel [IIT, Chicago]; Neuffer, David [Fermilab]; Rogers, Chris [Rutherford]

    2017-10-12

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
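
The non-parametric density-estimation idea can be illustrated with a Gaussian kernel on a toy 2D phase space; the distributions and bandwidth below are assumptions, not MICE data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 2D transverse phase space (x, x'); "after" mimics a cooled, smaller beam.
before = rng.normal(0.0, 1.0, (2, 4000))
after = rng.normal(0.0, 0.8, (2, 4000))

def gaussian_kde_at_origin(sample, h=0.2):
    # Kernel (non-parametric) density estimate evaluated at the phase-space origin.
    d, n = sample.shape
    norm = (2.0 * np.pi * h**2) ** (d / 2.0)
    return np.exp(-np.sum(sample**2, axis=0) / (2.0 * h**2)).sum() / (n * norm)

# Cooling shrinks the occupied phase-space volume, so the core density rises.
print(gaussian_kde_at_origin(after) > gaussian_kde_at_origin(before))
```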

  15. Estimation of Length-Weight Relationship and Proximate Composition of Catfish (Clarias gariepinus Burchell, 1822) from Two Different Culture Facilities

    Directory of Open Access Journals (Sweden)

    Olaniyi Alaba Olopade

    2015-06-01

    Full Text Available This study was carried out to determine and compare the proximate composition and length-weight relationship of C. gariepinus from two culture systems (earthen and concrete ponds). The fish samples were collected from three fish farms with the same culture conditions in different areas of Obio-akpor Local Government Area of Rivers State, Nigeria. Results on the length-weight relationship revealed that C. gariepinus reared in concrete tanks had a total length of 15.50-49.00 cm with a mean of 32.71 cm and a weight of 150-625 g, while the total length of C. gariepinus reared in earthen ponds ranged from 19.90-58.0 cm with a mean of 39.8 cm and a weight of 195-825 g. The t-test showed that both the total length and the weight of fish from the earthen ponds were significantly higher than those from the concrete tanks. The parameters of proximate composition analysed from the fish flesh were moisture, protein, lipid, carbohydrate, ash and fiber. Protein content was significantly higher in the earthen ponds than in the concrete tanks. Ash contents varied from 1.5±1.66-7.4±0.67% in the concrete tanks and were significantly higher than in the earthen ponds, where they ranged from 3.1±0.94-4.5±2.11%. Lipid was significantly higher in earthen ponds than in concrete tanks. Generally, the two culture systems had a significant influence on the length-weight relationship and nutritional value of C. gariepinus: fish reared in earthen ponds reached heavier body weights and higher nutritive values than those reared in concrete tanks.

  16. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10⁶-10⁷ year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.

  17. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.; Asmis, G.J.K.

    1983-01-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to non-conservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10⁶ to 10⁷ year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.

  18. The importance of the chosen technique to estimate diffuse solar radiation by means of regression

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Talha; Altyn Yavuz, Arzu [Department of Statistics. Science and Literature Faculty. Eskisehir Osmangazi University (Turkey)], email: mtarslan@ogu.edu.tr, email: aaltin@ogu.edu.tr; Acikkalp, Emin [Department of Mechanical and Manufacturing Engineering. Engineering Faculty. Bilecik University (Turkey)], email: acikkalp@gmail.com

    2011-07-01

    The Ordinary Least Squares (OLS) method is one of the most frequently used for the estimation of diffuse solar radiation. The data set must satisfy certain assumptions for the OLS method to work; the most important is that the error terms of the regression equation offered by OLS must follow a normal distribution. Utilizing an alternative robust estimator to obtain parameter estimates is highly effective in solving problems where normality fails due to the presence of outliers or some other factor. The purpose of this study is to investigate the value of the chosen technique for the estimation of diffuse radiation. This study describes alternative robust methods frequently used in applications and compares them with the OLS method. Comparing the data set analysis of the OLS technique with that of the M-regression (Huber, Andrews and Tukey) techniques, the study found that robust regression techniques are preferable to OLS because of the smoother explanation values.
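
The OLS-versus-Huber comparison can be sketched with an iteratively reweighted least squares (IRLS) solver; the synthetic regression data, injected outliers, and tuning constant below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical regression data with a few outliers that break normality.
x = rng.uniform(0.2, 0.8, 100)
y = 1.0 - 1.1 * x + rng.normal(0.0, 0.02, 100)
y[:5] += 0.5                                  # injected outliers

def huber_fit(x, y, c=1.345, iters=50):
    # M-regression with Huber weights, solved by iteratively reweighted LS.
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

ols_slope = np.polyfit(x, y, 1)[0]
huber_slope = huber_fit(x, y)[1]
print(round(ols_slope, 3), round(huber_slope, 3))
```

The Huber weights cap the influence of large residuals, so the robust slope stays close to the true value while OLS is pulled toward the outliers.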

  19. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency domain pilot aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS based techniques are computationally less complex. Unlike MMSE ones, they do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of the channel estimator incorporating MMSE based techniques is better compared to that obtained with the incorporation of LS based techniques. To enhance the MSE performance using LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied on the LS estimated channel impulse response (CIR). The advantage of denoising threshold based LS techniques is that they do not require KCS but still render near optimal MMSE performance similar to MMSE based techniques. In this paper, a detailed survey on various existing denoising strategies, with a comparative discussion of these strategies, is presented.
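
The LS-plus-threshold-denoising pipeline can be sketched end to end; the sparse channel, the all-pilot subcarrier layout, and the 3-times-median threshold rule below are simplified assumptions, not a specific scheme from the survey:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64                                        # subcarriers (all pilots, for simplicity)
h = np.zeros(N, complex)
h[[0, 3, 7]] = [1.0, 0.6 + 0.2j, 0.3j]        # sparse channel impulse response (CIR)
H = np.fft.fft(h)

pilots = np.exp(2j * np.pi * rng.random(N))   # unit-modulus pilot symbols
noise = rng.normal(0, 0.05, N) + 1j * rng.normal(0, 0.05, N)
rx = pilots * H + noise

# LS estimate per subcarrier, then transform to the time-domain CIR.
H_ls = rx / pilots
cir_ls = np.fft.ifft(H_ls)

# Denoising: keep only CIR taps above a threshold, zero out noise-only taps.
thr = 3 * np.median(np.abs(cir_ls))
cir_dn = np.where(np.abs(cir_ls) > thr, cir_ls, 0)
H_dn = np.fft.fft(cir_dn)

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_dn = np.mean(np.abs(H_dn - H) ** 2)
print(mse_dn < mse_ls)
```

Zeroing the noise-only taps removes most of the noise energy while keeping the true channel taps, which is how thresholding approaches MMSE-like performance without knowing the channel statistics.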

  20. Estimating the hemodynamic influence of variable main body-to-iliac limb length ratios in aortic endografts.

    Science.gov (United States)

    Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S

    2018-02-01

    We conducted a computational study to assess the hemodynamic impact of variant main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on the endograft (EG) in either the normal bifurcated (Bif) or the cross-limb (Cx) fashion. A customary bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a fluid-structure interaction method. Important hemodynamic parameters such as the pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG was calculated in the peak systolic phase. With increasing L1/L2, the pressure drop was found to increase for the Cx configuration, while decreasing for the Bif. The greatest helicity (4.1 m/s²) was seen in peak systole of the Cx configuration with a ratio of 1.5, whereas the greatest value in the Bif configuration (2 m/s²) was met in peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) in peak systole for the 1.5 L1/L2 ratio of the Cx configuration, while the maximum WSS value equaled 2 Pa for all length ratios of the Bif modification (with the WSS found for L1/L2=0.3 being marginally higher). There was greater discrepancy in the WSS values for all L1/L2 ratios of the Cx configuration compared to the Bif. Different L1/L2 ratios are shown to have an impact on the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values is also determined by the iliac limb pattern of the EG.
Since current custom-made EG solutions can reproduce variability in main-body/iliac limbs length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable

  1. Direction of Arrival Estimation Accuracy Enhancement via Using Displacement Invariance Technique

    Directory of Open Access Journals (Sweden)

    Youssef Fayad

    2015-01-01

    Full Text Available A new algorithm for improving Direction of Arrival Estimation (DOAE) accuracy has been carried out. Two contributions are introduced. First, the Doppler frequency shift that results from the target movement is estimated using the displacement invariance technique (DIT). Second, the effect of the Doppler frequency is modeled and incorporated into the ESPRIT algorithm in order to increase the estimation accuracy. It is worth mentioning that the subspace approach has been employed in the ESPRIT and DIT methods to reduce the computational complexity and the model’s nonlinearity effect. The DOAE accuracy has been verified by the closed-form Cramér-Rao bound (CRB). The simulation results of the proposed algorithm are better than those of the previous estimation techniques, leading to enhanced estimator performance.
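
A minimal single-source ESPRIT sketch on a uniform linear array illustrates the subspace idea the record builds on; the array geometry, noise level, and snapshot count are assumptions, not the paper's setup (the Doppler modeling and DIT step are omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
M, snapshots, d = 8, 400, 0.5        # elements, snapshots, spacing (wavelengths)
true_deg = 20.0

# One narrowband source on a uniform linear array, plus complex white noise.
steer = np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(np.radians(true_deg)))
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(steer, s) + noise

# ESPRIT: the rotational invariance between two overlapping subarrays of the
# signal subspace encodes the direction of arrival.
R = X @ X.conj().T / snapshots
_, eigvecs = np.linalg.eigh(R)
Es = eigvecs[:, -1:]                              # one source -> one signal eigenvector
phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
doa = np.degrees(np.arcsin(np.angle(phi[0, 0]) / (2 * np.pi * d)))
print(round(doa, 1))
```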

  2. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, Milton; Manos, George C.

    1994-01-01

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) Technique, and then estimate the coefficient of restitution from this free response estimate. In the paper this approach is investigated by simulating the response of a single degree
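
The Random Decrement idea, averaging response segments that start at a trigger condition so the random part cancels and a free-decay estimate remains, can be sketched on a simulated, randomly excited oscillator; all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
# Simulate a randomly excited, lightly damped SDOF oscillator (Euler steps).
dt, wn, zeta = 0.01, 2 * np.pi, 0.02          # illustrative parameters
x = v = 0.0
resp = np.empty(200000)
for i in range(resp.size):
    a = -2 * zeta * wn * v - wn**2 * x + rng.normal(0.0, 5.0)
    v += a * dt
    x += v * dt
    resp[i] = x

# Random Decrement: average all segments starting where the response up-crosses
# a trigger level; the random parts cancel and a free-decay estimate remains.
trig = resp.std()
L = 400
starts = np.where((resp[:-1] < trig) & (resp[1:] >= trig))[0] + 1
starts = starts[starts + L <= resp.size]
rdd = np.mean([resp[i:i + L] for i in starts], axis=0)
print(len(starts), round(rdd[0] / trig, 2))
```

Every averaged segment starts at the trigger level by construction, so the RDD signature begins near that level and then traces the system's free decay, from which damping-type parameters can be read off.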

  3. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    Science.gov (United States)

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique disclosed the highest R² (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but differences with the other techniques were small in magnitude. The number of stages utilized in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.
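
The stage-to-age regression workflow can be sketched with hypothetical data; the ordinal stage scale, the linear trend, and the scatter below are assumptions chosen only to illustrate how R² and RMSE summarize a staging technique:

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical ordinal development stages (1..8, Moorrees-like) and known ages;
# the linear trend and 3.4-year scatter are illustrative assumptions.
stage = rng.integers(1, 9, 300)
age = 12.0 + 1.3 * stage + rng.normal(0.0, 3.4, 300)

# Regression with age as response and the registered stage as predictor.
coef = np.polyfit(stage, age, 1)
pred = np.polyval(coef, stage)
rmse = np.sqrt(np.mean((age - pred) ** 2))
r2 = 1.0 - np.sum((age - pred) ** 2) / np.sum((age - age.mean()) ** 2)
print(round(rmse, 1), round(r2, 2))
```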

  4. Estimating length of stay in publicly-funded residential and nursing care homes: a retrospective analysis using linked administrative data sets.

    Science.gov (United States)

    Steventon, Adam; Roberts, Adam

    2012-10-31

    Information about how long people stay in care homes is needed to plan services, as length of stay is a determinant of future demand for care. As length of stay is proportional to cost, estimates are also needed to inform analysis of the long-term cost effectiveness of interventions aimed at preventing admissions to care homes. But estimates are rarely available due to the cost of repeatedly surveying individuals. We used administrative data from three local authorities in England to estimate the length of publicly-funded care homes stays beginning in 2005 and 2006. Stays were classified into nursing home, permanent residential and temporary residential. We aggregated successive placements in different care home providers and, by linking to health data, across periods in hospital. The largest group of stays (38.9%) were those intended to be temporary, such as for rehabilitation, and typically lasted 4 weeks. For people admitted to permanent residential care, median length of stay was 17.9 months. Women stayed longer than men, while stays were shorter if preceded by other forms of social care. There was significant variation in length of stay between the three local authorities. The typical person admitted to a permanent residential care home will cost a local authority over £38,000, less payments due from individuals under the means test. These figures are not apparent from existing data sets. The large cost of care home placements suggests significant scope for preventive approaches. The administrative data revealed complexity in patterns of service use, which should be further explored as it may challenge the assumptions that are often made.
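
Aggregating successive placements across intervening hospital periods is essentially interval merging; a minimal sketch with hypothetical episode dates (the one-day gap tolerance is an assumption for illustration):

```python
from datetime import date

# Hypothetical episodes for one person: two care-home placements bridged by a
# hospital spell; successive episodes are merged into one continuous stay.
episodes = [
    (date(2005, 3, 1), date(2005, 6, 10)),    # care home A
    (date(2005, 6, 10), date(2005, 6, 24)),   # period in hospital
    (date(2005, 6, 24), date(2006, 1, 15)),   # care home B
]

def merged_stay_days(episodes, max_gap_days=1):
    # Sort by start date, then extend the stay while episodes are contiguous.
    episodes = sorted(episodes)
    stay_start, stay_end = episodes[0]
    for start, end in episodes[1:]:
        if (start - stay_end).days > max_gap_days:
            break                              # a genuine gap ends the stay
        stay_end = max(stay_end, end)
    return (stay_end - stay_start).days

print(merged_stay_days(episodes))
```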

  5. Estimating length of stay in publicly-funded residential and nursing care homes: a retrospective analysis using linked administrative data sets

    Directory of Open Access Journals (Sweden)

    Steventon Adam

    2012-10-01

    Full Text Available Abstract Background Information about how long people stay in care homes is needed to plan services, as length of stay is a determinant of future demand for care. As length of stay is proportional to cost, estimates are also needed to inform analysis of the long-term cost effectiveness of interventions aimed at preventing admissions to care homes. But estimates are rarely available due to the cost of repeatedly surveying individuals. Methods We used administrative data from three local authorities in England to estimate the length of publicly-funded care homes stays beginning in 2005 and 2006. Stays were classified into nursing home, permanent residential and temporary residential. We aggregated successive placements in different care home providers and, by linking to health data, across periods in hospital. Results The largest group of stays (38.9%) were those intended to be temporary, such as for rehabilitation, and typically lasted 4 weeks. For people admitted to permanent residential care, median length of stay was 17.9 months. Women stayed longer than men, while stays were shorter if preceded by other forms of social care. There was significant variation in length of stay between the three local authorities. The typical person admitted to a permanent residential care home will cost a local authority over £38,000, less payments due from individuals under the means test. Conclusions These figures are not apparent from existing data sets. The large cost of care home placements suggests significant scope for preventive approaches. The administrative data revealed complexity in patterns of service use, which should be further explored as it may challenge the assumptions that are often made.

  6. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  7. Estimation of stature and length of limb segments in children and adolescents from whole-body dual-energy X-ray absorptiometry scans

    International Nuclear Information System (INIS)

    Abrahamyan, Davit O.; Gazarian, Aram; Braillon, Pierre M.

    2008-01-01

    Anthropometric standards vary among different populations, and renewal of these reference values is necessary. The aim of this study was to produce formulae for the assessment of limb segment lengths. Whole-body dual-energy X-ray absorptiometry scans of 413 Caucasian children and adolescents (170 boys, 243 girls) aged from 6 to 18 years were retrospectively analysed. Body height and the lengths of four long bones (humerus, radius, femur and tibia) were measured. The validity (concurrent validity) and reproducibility (intraobserver reliability) of the measurement technique were tested. High linear correlations (r > 0.9) were found among the five longitudinal measures. Corresponding linear regression equations for the most important relationships were derived. The tests of validity and reproducibility revealed a good degree of precision of the applied technique. The reference formulae obtained from the analysis of whole-body DEXA scans will be useful for anthropologists, and forensic and nutrition specialists, as well as for prosthetists and paediatric orthopaedic surgeons. (orig.)
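A reference formula of the kind the abstract describes is simply a linear regression of stature on a bone length. The sketch below derives one from synthetic data; the coefficients, sample size and noise level are illustrative assumptions, not the study's published equations.

```python
import numpy as np

# Synthetic femur-length/height pairs (cm) standing in for DEXA measurements.
rng = np.random.default_rng(1)
femur_cm = rng.uniform(30, 50, 150)
height_cm = 45.0 + 2.6 * femur_cm + rng.normal(0, 3.0, 150)

# Fit the linear reference formula height = a*femur + b and check r.
a, b = np.polyfit(femur_cm, height_cm, 1)
r = np.corrcoef(femur_cm, height_cm)[0, 1]

def estimate_height(femur):
    """Apply the fitted reference formula: height = a*femur + b."""
    return a * femur + b

print(round(r, 2), round(estimate_height(40.0), 1))
```

The high correlation (r > 0.9 in the study, and here by construction) is what makes a single-bone formula usable for forensic stature estimation.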

  8. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    International Nuclear Information System (INIS)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin; Kuijer, Joost P.A.; Ven, Peter M. van de; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick

    2017-01-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)
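The core arithmetic behind SLICE, as described above, is the fractional change of a segment's length relative to a reference frame. A minimal sketch, using illustrative lengths rather than patient data:

```python
# Strain of one myocardial segment across the cardiac cycle:
# percent length change relative to the first (reference) frame.
def slice_strain(lengths_mm):
    """Return per-frame strain (%) relative to the first frame."""
    l0 = lengths_mm[0]
    return [100.0 * (l - l0) / l0 for l in lengths_mm]

# Illustrative segment lengths (mm) over one cardiac cycle.
septal_lengths = [52.0, 50.4, 47.9, 46.8, 49.1, 52.0]
strain = slice_strain(septal_lengths)
peak = min(strain)  # peak shortening is the most negative strain
print(round(peak, 1))
```

Peak circumferential shortening is then the most negative value of this curve, which is what is compared segment-by-segment against CMR-TAG.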

  9. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    Energy Technology Data Exchange (ETDEWEB)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin [VU University Medical Center, Department of Cardiology, and Institute for Cardiovascular Research (ICaR-VU), Amsterdam (Netherlands); Kuijer, Joost P.A. [VU University Medical Center, Department of Physics and Medical Technology, Amsterdam (Netherlands); Ven, Peter M. van de [VU University Medical Center, Department of Epidemiology and Biostatistics, Amsterdam (Netherlands); Meine, Mathias [University Medical Center, Department of Cardiology, Utrecht (Netherlands); Croisille, Pierre; Clarysse, Patrick [Univ Lyon, UJM-Saint-Etienne, INSA, CNRS UMR 5520, INSERM U1206, CREATIS, Saint-Etienne (France)

    2017-12-15

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)

  10. Estimation of genetic variability and heritability of wheat agronomic traits resulted from some gamma rays irradiation techniques

    International Nuclear Information System (INIS)

    Wijaya Murti Indriatama; Trikoesoemaningtyas; Syarifah Iis Aisyah; Soeranto Human

    2016-01-01

    Gamma irradiation techniques have a significant effect on the frequency and spectrum of macro-mutations, but studies of their effect on micro-mutations, which are related to genetic variability in mutated populations, are very limited. The aim of this research was to study the effect of gamma irradiation techniques on the genetic variability and heritability of wheat agronomic characters in the M2 generation. This research was conducted from July to November 2014, at Cibadak experimental station, Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development, Ministry of Agriculture. Three introduced wheat breeding lines (F-44, Kiran-95 & WL-711) were treated with 3 gamma irradiation techniques (acute, fractionated and intermittent). The M1 generation of the combination treatments was planted and its spikes were harvested individually per plant. As the M2 generation, seeds of 75 M1 spikes were planted in the field with the one-row-one-spike method and evaluated for agronomic characters and their genetic components. The gamma irradiation techniques decreased the means but increased the ranges of agronomic traits in the M2 populations. Fractionated irradiation induced a higher mean and wider range in spike length and number of spikelets per spike than the other irradiation techniques. Fractionated and intermittent irradiation resulted in greater variability of grain weight per plant than acute irradiation. The number of tillers, spike weight, grain weight per spike and grain weight per plant in the M2 populations resulting from the three gamma irradiation techniques showed high estimated heritability and broad-sense genetic variability coefficient values. The three gamma irradiation techniques increased the genetic variability of agronomic traits in the M2 populations, except plant height. (author)
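One common way to estimate the genetic components mentioned above is broad-sense heritability, H² = Vg/Vp, with genotypic variance taken as the phenotypic variance of the segregating M2 population minus the environmental variance estimated from a non-segregating check. The sketch below uses this textbook decomposition with illustrative numbers; it is not the study's actual analysis of variance.

```python
from statistics import variance

# Illustrative grain-weight data (g/plant): a uniform parent line (its
# variance is taken as environmental) and a variable irradiated M2 set.
parent = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
m2 = [8.5, 12.3, 9.1, 13.8, 7.9, 11.5, 10.2, 14.0]

vp = variance(m2)          # phenotypic variance of M2
ve = variance(parent)      # environmental variance (non-segregating check)
vg = max(vp - ve, 0.0)     # genotypic variance
h2 = vg / vp               # broad-sense heritability
print(round(h2, 2))
```

A value near 1 means most of the observed M2 variation is genetic, which is the pattern the abstract reports for tiller number and the grain-weight traits.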

  11. Estimation of radiative forcing and chord length of shallow convective clouds (SCC) based on broadband pyranometer measurement network

    Science.gov (United States)

    Shi, H.

    2017-12-01

    We present a method to identify and calculate the cloud radiative forcing (CRF) and horizontal chord length (L) of shallow convective clouds (SCC) using a network of 9 broadband pyranometers. The analyzed data were collected during the SCC campaign in two summers (2015-2016) at the Baiqi site over the Inner Mongolia grassland. The network of pyranometers was operated across a spatial domain covering 42.16-42.30° N and 114.83-114.98° E. The SCC detection method was verified by observer reports and cameras, which showed that the detection method and human observations were in agreement about 75 % of the time. The differences between the SCC detection method and human observations can be attributed to the following factors: 1) small or dissipating clouds can be missed at the 1-min temporal resolution of the pyranometers; 2) human observation recorded weather conditions only four times per day; 3) SCC was indistinguishable from the coexistence of SCC and cirrus (Ci); 4) the SCC detection method is weighted toward clouds crossing the sun's path, while a human observer can view clouds over the entire sky. The deviation of L can be attributed to two factors: 1) the accuracy of the wind speed at the height of the SCC and the ratio of horizontal to vertical length play a key role in determining the values of L; 2) the effect of the variance of the solar zenith angle is negligible. The downwelling shortwave CRF of SCC was -134.1 Wm-2. The average value of L of the SCC was 1129 m. In addition, the distribution of normalized cloud chord lengths agreed well with a power-law fit.
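The chord-length estimate described above reduces to detecting runs of depressed irradiance (a cloud shadow crossing the pyranometer) and multiplying their duration by the wind speed at cloud height. The sketch below makes that explicit; the clear-sky value, the 80% threshold and the wind speed are assumptions for illustration, not the campaign's retrieval parameters.

```python
# Detect cloud-shadow runs in a 1-min irradiance series and convert each
# run to a chord length L = wind_speed * shadow_duration.
clear_sky = 800.0            # W m^-2, assumed clear-sky irradiance
threshold = 0.8 * clear_sky  # assumed shadow threshold
dt = 60.0                    # s, 1-min pyranometer sampling
wind_at_cloud = 8.0          # m s^-1, assumed wind speed at cloud height

irr = [790, 795, 410, 380, 395, 420, 788, 792, 350, 360, 790]

chords, run = [], 0
for x in irr:
    if x < threshold:
        run += 1                                     # inside a shadow
    elif run:
        chords.append(wind_at_cloud * run * dt)      # shadow ended
        run = 0
if run:
    chords.append(wind_at_cloud * run * dt)
print(chords)
```

The two shadow runs here (4 min and 2 min) give chords of 1920 m and 960 m, the same order as the 1129 m mean reported above; the sensitivity to the assumed wind speed is exactly the first error source the abstract names.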

  12. A new slit lamp-based technique for anterior chamber angle estimation.

    Science.gov (United States)

    Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A

    2014-06-01

    To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by SLACE and gonioscopy (Spaeth classification) was 0.81. The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.
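The agreement statistic used above, Spearman's rank correlation between two ordinal angle gradings, can be computed from scratch as below. The grades are illustrative Spaeth-like ordinal values, not the study's data, and the tie-handling (average ranks) is the standard convention.

```python
# Spearman rank correlation between two ordinal gradings, with tied
# observations assigned their average rank.
def ranks(xs):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

gonio = [0, 1, 1, 2, 3, 4, 4, 2, 3, 0]   # illustrative angle grades
slace = [0, 1, 2, 2, 3, 4, 3, 2, 4, 1]
print(round(spearman(gonio, slace), 2))
```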

  13. Calculational techniques for estimating population doses from radioactivity in natural gas from nuclearly stimulated wells

    International Nuclear Information System (INIS)

    Barton, C.J.; Moore, R.E.; Rohwer, P.S.; Kaye, S.V.

    1975-01-01

    Techniques for estimating radiation doses from exposure to combustion products of natural gas obtained from wells created by use of nuclear explosives were first developed in the Gasbuggy Project. These techniques were refined and extended by development of a number of computer codes in studies related to the Rulison Project, the second in the series of joint government-industry efforts to demonstrate the feasibility of increasing natural gas production from low-permeability rock formations by use of nuclear explosives. These techniques are described and dose estimates that illustrate their use are given. These dose estimation studies have been primarily theoretical, but we have tried to make our hypothetical exposure conditions correspond as closely as possible with conditions that could exist if nuclearly stimulated natural gas is used commercially. (author)

  14. A concise account of techniques available for shipboard sea state estimation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2017-01-01

    This article gives a review of techniques applied to make sea state estimation on the basis of measured responses on a ship. The general concept of the procedures is similar to that of a classical wave buoy, which exploits a linear assumption between waves and the associated motions.

  15. Water temperature forecasting and estimation using fourier series and communication theory techniques

    International Nuclear Information System (INIS)

    Long, L.L.

    1976-01-01

    Fourier series and statistical communication theory techniques are utilized in the estimation of river water temperature increases caused by external thermal inputs. An example estimate assuming a constant thermal input is demonstrated. A regression fit of the Fourier series approximation of temperature is then used to forecast daily average water temperatures. Also, a 60-day prediction of daily average water temperature is made with the aid of the Fourier regression fit by using significant Fourier components
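The Fourier-regression idea described above can be sketched as fitting a mean plus first-harmonic sine/cosine pair to a year of daily temperatures, then evaluating the fit at a future day. Because the basis is orthogonal over a full discrete period, the least-squares coefficients reduce to the closed-form projections used below. Data are synthetic; the single-harmonic truncation is an assumption.

```python
import math

# One synthetic year of daily water temperatures with an annual cycle.
N = 365
temps = [12.0 + 8.0 * math.sin(2 * math.pi * d / N - 1.3) for d in range(N)]

# Least-squares Fourier coefficients via discrete orthogonal projections.
a0 = sum(temps) / N
a1 = 2.0 / N * sum(t * math.cos(2 * math.pi * d / N) for d, t in enumerate(temps))
b1 = 2.0 / N * sum(t * math.sin(2 * math.pi * d / N) for d, t in enumerate(temps))

def forecast(day):
    """First-harmonic Fourier fit evaluated at an arbitrary day."""
    w = 2 * math.pi * day / N
    return a0 + a1 * math.cos(w) + b1 * math.sin(w)

print(round(forecast(400 % N), 2))  # a forecast beyond the fitted year
```

For real river data the residual after removing the significant harmonics would carry the thermal-input signal the abstract analyzes with communication-theory methods.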

  16. Cost Engineering Techniques and Their Applicability for Cost Estimation of Organic Rankine Cycle Systems

    Directory of Open Access Journals (Sweden)

    Sanne Lemmens

    2016-06-01

    Full Text Available The potential of organic Rankine cycle (ORC) systems is acknowledged by both considerable research and development efforts and an increasing number of applications. Most research aims at improving ORC systems through technical performance optimization of various cycle architectures and working fluids. The assessment and optimization of technical feasibility is at the core of ORC development. Nonetheless, economic feasibility is often decisive when it comes down to considering practical installations, and therefore an increasing number of publications include an estimate of the costs of the designed ORC system. Various methods are used to estimate ORC costs but the resulting values are rarely discussed with respect to accuracy and validity. The aim of this paper is to provide insight into the methods used to estimate these costs and open the discussion about the interpretation of these results. A review of cost engineering practices shows there has been a long tradition of industrial cost estimation. Several techniques have been developed, but the expected accuracy range of the best techniques used in research varies between 10% and 30%. The quality of the estimates could be improved by establishing up-to-date correlations for the ORC industry in particular. Secondly, the rapidly growing ORC cost literature is briefly reviewed. A graph summarizing the estimated ORC investment costs displays a pattern of decreasing costs for increasing power output. Knowledge of the actual costs of real ORC modules and projects remains scarce. Finally, the investment costs of a known heat recovery ORC system are discussed and the methodologies and accuracies of several approaches are demonstrated using this case as benchmark. The best results are obtained with factorial estimation techniques such as the module costing technique, but the accuracies may diverge by up to 30%. Development of correlations and multiplication factors for ORC technology in particular is therefore recommended.
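A factorial estimate of the kind reviewed above typically scales a reference equipment cost with capacity by a power law (the classic "six-tenths rule") and then applies an installation (module) factor. The sketch below illustrates that arithmetic; the reference cost, exponent and module factor are generic assumptions, not the ORC-specific correlations the paper calls for.

```python
# Capacity-scaled cost estimate: C = C_ref * (S / S_ref)^n, then an
# installed-cost (module) multiplier on top of the equipment cost.
def scaled_cost(size, ref_size, ref_cost, exponent=0.6):
    """Power-law ("six-tenths rule") scaling of a reference cost."""
    return ref_cost * (size / ref_size) ** exponent

ref_cost_eur = 200_000.0   # assumed cost of a 100 kW reference module
module_factor = 2.5        # assumed installed-cost multiplier

equip = scaled_cost(250.0, 100.0, ref_cost_eur)   # 250 kW ORC, equipment
installed = module_factor * equip
print(round(equip), round(installed))
```

The exponent below 1 reproduces the economy of scale visible in the paper's cost-versus-power graph: a 2.5x larger unit costs only ~1.7x more.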

  17. Switching EKF technique for rotor and stator resistance estimation in speed sensorless control of IMs

    International Nuclear Information System (INIS)

    Barut, Murat; Bogosyan, Seta; Gokasan, Metin

    2007-01-01

    High performance speed sensorless control of induction motors (IMs) calls for estimation and control schemes that offer solutions to parameter uncertainties as well as to difficulties involved with accurate flux/velocity estimation at very low and zero speed. In this study, a new EKF based estimation algorithm is proposed for the solution of both problems and is applied in combination with speed sensorless direct vector control (DVC). The technique is based on the consecutive execution of two EKF algorithms, by switching from one algorithm to another at every n sampling periods. The number of sampling periods, n, is determined based on the desired system performance. The switching EKF approach, thus applied, provides an accurate estimation of a larger number of parameters than would be possible with a single EKF algorithm. The simultaneous and accurate estimation of the rotor resistance Rr' and stator resistance Rs, both in the transient and steady state, is an important challenge in speed sensorless IM control, and reported studies achieving satisfactory results are few, if any. With the technique proposed in this study, the sensorless estimation of Rr' and Rs is achieved in transient and steady state and in both high and low speed operation, while also estimating the unknown load torque, velocity, flux and current components. The performance demonstrated by the simulation results at zero speed, as well as at low and high speed operation, is very promising when compared with individual EKF algorithms performing either Rr' or Rs estimation, or with the few other approaches taken in past studies, which require either signal injection and/or a change of algorithms based on the speed range. The results also motivate utilization of the technique for multiple parameter estimation in a variety of control methods.

  18. The Impact of the Processing Batch Length in GNSS Data Analysis on the Estimates of Earth Rotation Parameters with Daily and Subdaily Time Resolution

    Science.gov (United States)

    Meindl, M.; Dach, R.; Thaller, D.; Schaer, S.; Beutler, G.; Jaeggi, A.

    2012-04-01

    Microwave observations from GNSS are traditionally analyzed in the post-processing mode using (solar) daily data batches. The 24-hour session length differs by only about four minutes from two revolution periods of a GPS satellite (corresponding to one sidereal day). The deep 2:1 resonance of the GPS revolution period with the length of the sidereal day may cause systematic effects in parameter estimates and spurious periodic signals in the resulting parameter time series. The selection of other (than daily) session lengths may help to identify systematic effects and to study their impact on GNSS-derived products. Such investigations are of great interest in a combined multi-GNSS analysis because of substantial differences in the satellites' revolution periods. Three years (2008-2010) of data from a global network of about 90 combined GPS/GLONASS receivers have been analyzed. Four different session lengths were used, namely the traditional 24 hours (UTC), two revolutions of a GLONASS satellite (16/17 sidereal days), two revolutions of a GPS satellite (one sidereal day), and a session length of 18/17 sidereal days, which does not correspond to either two GPS or two GLONASS revolution periods. GPS-only, GLONASS-only, and GPS/GLONASS-combined solutions were established for each of the session lengths. Special care was taken to keep the GPS and GLONASS solutions fully consistent and comparable, in particular where the station selection is concerned. We generate ERPs with a subdaily time resolution of about 1.4 hours (1/17 sidereal day). Using the session-specific normal equation systems (NEQs) containing the Earth rotation parameters with the 1.4-hour time resolution, we derive in addition ERPs with a (sidereal) daily resolution. Note that this step requires the combination of the daily NEQs and a subsequent re-binning of 17 consecutive ERPs with 1/17 day time resolution into one (sidereal) daily parameter. These tests will reveal the impact of the session length on the ERP estimates.

  19. Self-consistent technique for estimating the dynamic yield strength of a shock-loaded material

    International Nuclear Information System (INIS)

    Asay, J.R.; Lipkin, J.

    1978-01-01

    A technique is described for estimating the dynamic yield stress in a shocked material. This method employs reloading and unloading data from a shocked state along with a general assumption of yield and hardening behavior to estimate the yield stress in the precompressed state. No other data are necessary for this evaluation, and, therefore, the method has general applicability at high shock pressures and in materials undergoing phase transitions. In some special cases, it is also possible to estimate the complete state of stress in a shocked state. Using this method, the dynamic yield strength of aluminum at 2.06 GPa has been estimated to be 0.26 GPa. This value agrees reasonably well with previous estimates
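A much-simplified reading of the reload/unload idea above: if the shocked state can sit anywhere inside the yield surface, the stresses at which reloading and unloading first turn plastic bracket an interval of width about 2Y, so the yield stress is taken as roughly half their difference. The factor of one-half and the stress values below are illustrative assumptions for this sketch, not the paper's actual analysis.

```python
# Bracketing estimate of dynamic yield strength from reload/unload limits.
sigma_reload = 2.32   # GPa, assumed reloading elastic limit
sigma_unload = 1.80   # GPa, assumed unloading elastic limit

# Simplified assumption: the elastic limits bracket an interval ~2Y wide.
yield_strength = 0.5 * (sigma_reload - sigma_unload)
print(round(yield_strength, 2))
```

With these assumed limits the estimate lands at 0.26 GPa, the same order as the aluminum value quoted in the abstract; the real method additionally requires the assumed yield and hardening behavior mentioned above.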

  20. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    Science.gov (United States)

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.

  1. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with Nédélec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen

  2. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    are too high to count manually, and stereology is now the technique of choice whenever estimates of three-dimensional quantities need to be extracted from measurements on two-dimensional sections. All stereological methods are in principle unbiased; however, they rely on proper knowledge about...

  3. A passive technique using SSNTDs for Estimation of thorium to uranium ratios in rocks

    International Nuclear Information System (INIS)

    Kenawy, M.A.; Sayyah, T.A.; Said, A.F.; Hafez, A.F.

    2005-01-01

    A passive technique using plastic nuclear track detectors (CR-39 and LR-115) is presented to estimate Th/U ratios, and consequently the thorium and uranium content, in granites taken from uranium exploration mines in the Egyptian desert. The registration sensitivities of both the CR-39 and LR-115 detectors for close-contact alpha-radiography were determined, and the uranium and thorium concentrations in ppm were computed

  4. Estimation of Anti-HIV Activity of HEPT Analogues Using MLR, ANN, and SVM Techniques

    Directory of Open Access Journals (Sweden)

    Basheerulla Shaik

    2013-01-01

    value than those of MLR and SVM techniques. The rm² metrics and ridge regression analysis indicated that the proposed four-variable model with MATS5e, RDF080u, T(O⋯O), and MATS5m as correlating descriptors is the best for estimating the anti-HIV activity (log 1/C) of the present set of compounds.

  5. Detection of different-time-scale signals in the length of day variation based on EEMD analysis technique

    Directory of Open Access Journals (Sweden)

    Wenbin Shen

    2016-05-01

    Full Text Available Scientists pay great attention to different-time-scale signals in the length of day (LOD) variations, ΔLOD, which provide signatures of the Earth's interior structure, couplings among different layers, and potential excitations of ocean and atmosphere. In this study, based on the ensemble empirical mode decomposition (EEMD), we analyzed the latest time series of ΔLOD data spanning from January 1962 to March 2015. We observed signals with periods and amplitudes of about 0.5 month and 0.19 ms, 1.0 month and 0.19 ms, 0.5 yr and 0.22 ms, 1.0 yr and 0.18 ms, 2.28 yr and 0.03 ms, 5.48 yr and 0.05 ms, respectively, in coincidence with the results of predecessors. In addition, some signals that were previously not definitely observed by predecessors were detected in this study, with periods and amplitudes of 9.13 d and 0.12 ms, 13.69 yr and 0.10 ms, respectively. The mechanisms of the LOD fluctuations of these two signals are still open.

  6. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    Science.gov (United States)

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain important in the identification process despite the advent and success of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive as well as revise the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches: morphological, metric, molecular and radiographic methods for sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered to be more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for removal of impulse noise, Gaussian noise and mixed noise (impulse plus Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is selected for replacing each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
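A greatly simplified sketch of the detect-then-replace idea described above (not the authors' full algorithm, which switches among several filters using a noise-variance estimate): a pixel is flagged as a suspected impulse only if it equals the local window extremum, and flagged pixels are replaced by the median of the remaining neighbours. The function name and window logic are illustrative assumptions.

```python
import numpy as np

def adaptive_decision_filter(img, w=3):
    """Toy decision-based impulse filter: flag a pixel as corrupted only
    if it equals the min or max of its w x w window (likely salt/pepper);
    replace flagged pixels by the median of the non-extreme neighbours,
    falling back to the window median when no such neighbour exists."""
    pad = w // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.astype(float).copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + w, j:j + w]
            v = img[i, j]
            if v == win.min() or v == win.max():      # suspected impulse
                good = win[(win > win.min()) & (win < win.max())]
                out[i, j] = np.median(good) if good.size else np.median(win)
    return out
```

Clean pixels pass through unchanged unless they happen to be a window extremum, in which case the median fallback still leaves smooth regions intact.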

  8. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)

  9. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady-flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...... with a blood mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data was recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when

  10. A novel technique to optimise the length of a linear accelerator treatment room maze without compromising radiation protection.

    Science.gov (United States)

    Al-Affan, I A M; Evans, S C; Qutub, M; Hugtenburg, R P

    2018-03-01

    Simulations with the FLUktuierende KAskade (FLUKA) Monte Carlo code were used to establish the possibility of introducing lead to cover the existing concrete walls of a linear accelerator treatment room maze, in order to reduce the dose of the scattered photons at the maze entrance. In the present work, a pilot study performed at Singleton Hospital in Swansea was used to pioneer the use of lead sheets of various thicknesses to absorb scattered low energy photons in the maze. The dose reduction was considered to be due to the strong effect of the photoelectric interaction in lead resulting in attenuation of the back-scattered photons. Calculations using FLUKA with mono-energetic photons were used to represent the main components of the x-ray spectrum up to 10 MV. Mono-energetic photons were used to enable the study of the behaviour of each energy component from the associated interaction processes. The results showed that adding lead of 1 to 4 mm thickness to the walls and floor of the maze reduced the dose at the maze entrance by up to 80%. Subsequent scatter dose measurements performed at the maze entrance of an existing treatment room with lead sheet of 1.3 mm thickness added to the maze walls and floor supported the results from the simulations. The dose reduction at the maze entrance with the lead in place was up to 50%. The variation between simulation and measurement was attributed to the fact that insufficient lead was available to completely cover the maze walls and floor. This novel proposal of partly, or entirely, covering the maze walls with lead a few millimetres in thickness has implications for the design of linear accelerator treatment rooms since it has the potential to provide savings, in terms of space and costs, when an existing maze requires upgrading in an environment where space is limited and the maze length cannot be extended sufficiently to reduce the dose.
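The dose reduction the authors attribute to photoelectric absorption in lead can be illustrated with a simple narrow-beam attenuation estimate. The attenuation coefficient below is an assumed round figure for photons of roughly back-scatter energy, chosen purely for illustration; it is not a value from the paper, and real shielding calculations should use tabulated coefficients.

```python
import math

# Assumed (illustrative) linear attenuation coefficient for lead in the
# low-energy region typical of back-scattered maze photons; real work
# should take mass attenuation coefficients from standard tabulations.
MU_PB = 11.0  # cm^-1, hypothetical round figure

def transmission(thickness_mm, mu_per_cm=MU_PB):
    """Narrow-beam transmission exp(-mu * x) through a lead sheet."""
    return math.exp(-mu_per_cm * thickness_mm / 10.0)
```

Under this toy model a 2 mm lining passes only about exp(-2.2) of the incident low-energy fluence, i.e. a reduction of roughly 90%, which is at least consistent in magnitude with the up-to-80% reduction reported in the simulations.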

  11. Precision of four otolith techniques for estimating age of white perch from a thermally altered reservoir

    Science.gov (United States)

    Snow, Richard A.; Porta, Michael J.; Long, James M.

    2018-01-01

    The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.

  12. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-09-01

Full Text Available An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, such as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove the background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along the boresight is about one order of magnitude worse than for the other two components.
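The least-squares step can be sketched as follows, under a simplification of the paper's pipeline: assume region-based optical flow has already produced, for each tracked star, a line-of-sight unit vector u and its inter-frame displacement Δu. In the rotating sensor frame du/dt = -ω × u = [u]×ω, so stacking one skew matrix per star gives an overdetermined linear system for ω. Function names are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(a) @ b == a x b."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_omega(units, flows, dt):
    """Least-squares angular velocity from star unit vectors and their
    inter-frame displacements: du ~ [u]x omega * dt for each star, so
    A omega = b with A built from the skew matrices."""
    A = np.vstack([skew(u) * dt for u in units])
    b = np.hstack(list(flows))
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

Each star constrains only the two components of ω perpendicular to its line of sight, which is consistent with the abstract's observation that the boresight component is estimated about an order of magnitude less accurately.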

  13. Length of Distal Resection Margin after Partial Mesorectal Excision for Upper Rectal Cancer Estimated by Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Bondeven, Peter; Hagemann-Madsen, Rikke Hjarnø; Bro, Lise

BACKGROUND: Rectal cancer requires surgery for cure. Partial mesorectal excision (PME) is suggested for tumours in the upper rectum and implies transection of the mesorectum perpendicular to the bowel a minimum of 5 cm below the tumour. Reports have shown distal mesorectal tumour spread of up to 5...... cm from the primary tumour; therefore, guidelines for cancer of the upper rectum recommend PME with a distal resection margin (DRM) of at least 5 cm, or total mesorectal excision (TME). PME carries the hazard of removing less than 5 cm - leaving behind microscopic tumour cells that have spread in the mesorectum....... Studies at our department have shown an inadequate DRM in 75 % of the patients, as estimated by post-operative MRI of the pelvis and by measurements of the histopathological specimen. Correspondingly, a higher rate of local recurrence in patients surgically treated with PME for rectal cancer - compared to TME...

  14. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    Science.gov (United States)

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to

  15. Application of the control variate technique to estimation of total sensitivity indices

    International Nuclear Information System (INIS)

    Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.

    2015-01-01

Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method, also known as Sobol' sensitivity indices, has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices, we apply the control variate reduction technique and develop a new formula for the evaluation of total sensitivity indices. Results presented for well known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula.
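The abstract does not give its formulas, but the plain baseline that a control variate would accelerate is the standard Jansen Monte Carlo estimator for total indices, ST_i = mean((f(A) - f(AB_i))^2) / (2 Var f), where AB_i is the matrix A with its i-th column taken from B. The sketch below, including the test function, is illustrative and not from the paper.

```python
import numpy as np

def total_indices(f, d, n, rng):
    """Jansen's Monte Carlo estimator of Sobol' total sensitivity
    indices (the plain estimator a control variate would improve on).
    A and B are independent n x d sample matrices on [0,1]^d; AB_i is
    A with its i-th column replaced by that of B."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    var = np.var(np.concatenate([fA, f(B)]))
    st = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        st[i] = np.mean((fA - f(ABi)) ** 2) / (2.0 * var)
    return st
```

For an additive linear model f = 4 x1 + x2 with independent uniform inputs, the exact total indices are 16/17 and 1/17, which the estimator approaches as n grows; the point of the paper is that the control variate reaches a given accuracy with far fewer evaluations.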

  16. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion's share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation from the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen specialist journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set: Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first is the identification of the research gap in this area, which has not been fully covered by previous proposals for construction cost estimation. The second is the proposal and highlighting of future directions for forthcoming work, aimed ultimately at finding the optimal approach to construction cost estimation. Moreover, we consider the second part of our methodology to be one of the contributions of this paper, as it has been proposed as a standard benchmark for construction cost estimation proposals.

  17. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

    International Nuclear Information System (INIS)

    Lee, Seung-Kuk

    2013-01-01

Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., the forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the actual status, potentials and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimations on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite the limiting impact of temporal decorrelation in Pol-InSAR inversion, it remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (Tandem-L and BIOMASS satellite missions at L- and P-band). The impact of the system parameters (e.g., bandwidth

  18. Estimation of performance shaping factors for overtime and shift length using expert judgment based on related assessments

    International Nuclear Information System (INIS)

    Vickroy, S.C.

    1986-01-01

This paper presents the results of a study to estimate human performance and error rates under several amounts of overtime and different shift lengths without the use of human subjects. Ten chronobiology, fatigue, and shift-scheduling experts were administered a questionnaire to rate the effects that several shifts and overtime amounts might have on the performance of individuals working under those constraints. The ratings were transformed to generate performance shaping factors, which were used for sensitivity analyses on three previously published probabilistic risk assessments. This procedure was performed to determine the effect that different shift schedules and amounts of overtime would have on overall plant performance. The results of the analysis suggest that shift scheduling and overtime could increase the risk of human-error-caused accidents at a nuclear power plant by up to a factor of five, and the overall chance of an accident at the plant by a factor of about three.

  19. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters

    Science.gov (United States)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  20. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
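A minimal sketch of the scheme described above, under simplifying assumptions: the noise autocorrelation matrix is updated recursively with a weighting factor lam (finite memory), and the detection statistic for each frame vector x is the colored-noise matched filter t = sᵀ R⁻¹ x computed from the current noise estimate. The function name, the diagonal-loading term `eps`, and the update order are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def detect(frames, template, lam=0.95, eps=1e-3):
    """Recursive colored-noise matched-filter detection.  The noise
    autocorrelation matrix R is tracked with exponential weighting
    (lam < 1 gives finite memory for nonstationary clutter); for each
    frame the statistic t = s^T R^{-1} x is computed before R is
    updated with that frame."""
    n = template.size
    R = np.eye(n)
    stats = []
    for x in frames:
        w = np.linalg.solve(R + eps * np.eye(n), template)  # R^{-1} s
        stats.append(float(w @ x))
        R = lam * R + (1.0 - lam) * np.outer(x, x)          # weighted update
    return stats
```

On frames of pure noise the statistic stays near zero; a frame containing the template at sufficient amplitude stands out clearly, which is the basis for thresholding in the point-target detector.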

  1. First- and zero-sound velocity and Fermi liquid parameter F2s in liquid 3He determined by a path length modulation technique

    International Nuclear Information System (INIS)

    Hamot, P.J.; Lee, Y.; Sprague, D.T.

    1995-01-01

We have measured the velocity of first- and zero-sound in liquid 3He at 12.6 MHz over the pressure range of 0.6 to 14.5 bar using a path length modulation technique that we have recently developed. From these measurements, the pressure dependent value of the Fermi liquid parameter F2s was calculated and found to be larger at low pressure than previously reported. These new values of F2s indicate that transverse zero-sound is a propagating mode at all pressures. The new values are important for the interpretation of the frequencies of order parameter collective modes in the superfluid phases. The new acoustic technique permits measurements in regimes of very high attenuation with a sensitivity in phase velocity of about 10 ppm, achieved by a feedback arrangement. The sound velocity is thus measured continuously throughout the highly attenuating crossover (ωτ ∼ 1) regime, even at the lowest pressures.

  2. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    Science.gov (United States)

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman Filter techniques: the performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer and of a Linear, an Extended and an Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and the enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison under different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variety of the length of the rotational semi-axis.

  4. Estimates of soil erosion and deposition of cultivated soil of Nakhla watershed, Morocco, using 137Cs technique and calibration models

    International Nuclear Information System (INIS)

    Bouhlassa, S.; Moukhchane, M.; Aiachi, A.

    2000-01-01

Despite the effective threat of erosion to soil preservation and productivity in Morocco, there is still only limited information on the rates of soil loss involved. This study aims to establish long-term erosion rates on cultivated land in the Nakhla watershed, located in the north of the country, using the 137Cs technique. Two sampling strategies were adopted: the first aims at establishing areal estimates of erosion, whereas the second, based on a transect approach, is intended to determine point erosion. Twenty-one cultivated sites and seven undisturbed sites, apparently not affected by erosion or deposition, were sampled to 35 cm depth. Nine cores were collected along a transect of 149 m length. The assessment of erosion rates with models varying in complexity from the simple Proportional Model to more complex Mass Balance Models, which attempt to include the processes controlling the redistribution of 137Cs in soil, enables us to demonstrate the significance of the soil erosion problem on cultivated land. Erosion rates rise up to 50 t ha-1 yr-1. The 137Cs-derived erosion rates provide a reliable representation of the water erosion pattern in the area, and indicate the importance of the tillage process in the redistribution of 137Cs in soil. For aggrading sites, a Constant Rate of Supply (CRS) Model has been adapted and introduced to estimate the depositional rate easily. (author) [fr
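As a hedged illustration of the simple Proportional Model mentioned above, the sketch below follows the commonly cited Walling-and-He-style formulation; the formula and every default input (plough depth, bulk density, elapsed time) are assumptions for illustration and should be checked against the calibration-model literature, not taken from this paper.

```python
def proportional_model(inv_ref, inv_point, plough_depth_m=0.2,
                       bulk_density=1300.0, t_years=37.0):
    """Proportional-model soil loss estimate (illustrative constants):
        X = percentage reduction in 137Cs inventory at the sampling
            point relative to the undisturbed reference inventory
        Y = 10 * B * d * X / (100 * T)     [t ha^-1 yr^-1]
    with d the plough depth (m), B the bulk density (kg m^-3), and T
    the years elapsed since the main fallout input."""
    x_percent = 100.0 * (inv_ref - inv_point) / inv_ref
    return 10.0 * bulk_density * plough_depth_m * x_percent / (100.0 * t_years)
```

With a 30% inventory reduction and the assumed defaults this gives roughly 21 t ha^-1 yr^-1, i.e. the same order of magnitude as the up-to-50 t ha^-1 yr^-1 rates reported for the watershed.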

  5. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple-input multiple-output (MIMO) systems. However, the accuracy attainable in practice is limited due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  6. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  7. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    Energy Technology Data Exchange (ETDEWEB)

    Zarzalejo, L.F.; Ramirez, L.; Polo, J. [DER-CIEMAT, Madrid (Spain). Renewable Energy Dept.

    2005-07-01

Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. Generally better behaviour was observed for the artificial intelligence models. (author)

  8. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  9. Artificial intelligence techniques applied to hourly global irradiance estimation from satellite-derived cloud index

    International Nuclear Information System (INIS)

    Zarzalejo, Luis F.; Ramirez, Lourdes; Polo, Jesus

    2005-01-01

Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon the Heliosat I model. Generally better behaviour was observed for the artificial intelligence models

  10. PERFORMANCE ANALYSIS OF PILOT BASED CHANNEL ESTIMATION TECHNIQUES IN MB OFDM SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Madheswaran

    2011-12-01

Full Text Available Ultra wideband (UWB) communication is mainly used for short-range communication in wireless personal area networks. Orthogonal Frequency Division Multiplexing (OFDM) is being used as a key physical layer technology for Fourth Generation (4G) wireless communication. OFDM based communication gives high spectral efficiency and mitigates Inter-Symbol Interference (ISI) in a wireless medium. In this paper the IEEE 802.15.3a based Multiband OFDM (MB OFDM) system is considered. Pilot based channel estimation techniques are considered to analyze the performance of MB OFDM systems over Linear Time Invariant (LTI) channel models. In this paper, pilot based Least Square (LS) and Linear Minimum Mean Square Error (LMMSE) channel estimation techniques have been considered for the UWB OFDM system. In the proposed method, the estimated Channel Impulse Responses (CIRs) are filtered in the time domain to account for the channel delay spread. The performance of the proposed system has also been analyzed for different modulation techniques and various pilot density patterns.
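The LS half of the comparison can be sketched in a few lines: divide the received pilot symbols by the known transmitted pilots to get H_LS at the pilot subcarriers, then interpolate across the remaining subcarriers. The LMMSE estimator and the time-domain CIR filtering are omitted, and the linear-interpolation choice is an illustrative assumption.

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_sub):
    """Pilot-based least-squares channel estimation for OFDM:
    H_LS[p] = Y[p] / X[p] at each pilot subcarrier p, followed by
    linear interpolation of the real and imaginary parts over all
    n_sub subcarriers."""
    h_p = rx_pilots / tx_pilots            # LS estimate at the pilots
    k = np.arange(n_sub)
    h_re = np.interp(k, pilot_idx, h_p.real)
    h_im = np.interp(k, pilot_idx, h_p.imag)
    return h_re + 1j * h_im
```

Denser pilot patterns cost throughput but track frequency-selective channels better, which is the trade-off behind the pilot-density analysis in the paper.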

  11. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.

  12. Three different applications of genetic algorithm (GA) search techniques on oil demand estimation

    International Nuclear Information System (INIS)

    Canyurt, Olcay Ersel; Oztuerk, Harun Kemal

    2006-01-01

    The present study develops three scenarios to analyze oil consumption and make future projections based on the genetic algorithm (GA) notion, and examines the effect of the design parameters on the oil utilization values. The models, developed in non-linear form, are applied to the oil demand of Turkey. The GA Oil Demand Estimation Model (GAODEM) is developed to estimate future oil demand values based on Gross National Product (GNP), population, import, export, oil production, oil import and car, truck and bus sales figures. Among these models, the GA-PGOiTI model, which uses population, GNP, oil import, truck sales and import as design parameters/indicators, was found to provide the best fit to the observed data. It may be concluded that the proposed models can be used as alternative solution and estimation techniques for the future oil utilization values of any country.
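A demand-estimation GA of this kind can be sketched as follows; the indicator data and coefficients below are synthetic stand-ins, not the Turkish oil-demand figures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" demand driven by two indicators (think GNP, population).
X = rng.uniform(0.5, 2.0, size=(30, 2))
true_w = np.array([1.2, 0.7])
y = X @ true_w + 0.01 * rng.normal(size=30)

def fitness(w):
    # Higher is better: negative mean squared error of the demand model.
    return -np.mean((X @ w - y) ** 2)

pop = rng.uniform(0.0, 2.0, size=(40, 2))      # initial random population
for _ in range(60):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]    # selection: keep the best half
    i, j = rng.integers(0, 20, size=(2, 20))
    kids = np.where(rng.random((20, 2)) < 0.5, parents[i], parents[j])  # crossover
    kids = kids + 0.05 * rng.normal(size=(20, 2))                       # mutation
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)  # best coefficient vector found
```

Keeping the parents each generation (elitism) guarantees the best fitness never degrades.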

  13. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented, primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques solves the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements, bound to the specifics of high-voltage electric transmission systems, are also suggested.
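One predict/filter cycle of the underlying Kalman recursion can be sketched in its linear form; the matrices below are illustrative, not a power-system model (in the dynamic estimator the prediction role is played by the load forecast):

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (assumed)
H = np.array([[1.0, 0.0]])               # we measure only the first state
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.04]])                   # measurement noise covariance

x = np.array([0.0, 1.0])                 # current state estimate
P = np.eye(2)                            # estimate covariance

# Prediction step.
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Filtering step with a new measurement z.
z = np.array([0.12])
S = H @ P_pred @ H.T + R                 # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
x = x_pred + K @ (z - H @ x_pred)        # corrected state
P = (np.eye(2) - K @ H) @ P_pred         # corrected covariance
```

The dimensionality problem the paper addresses arises because `P` grows quadratically with the number of state variables, which the decomposition-aggregation scheme circumvents.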

  14. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews the research advances of VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on the two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, when the measured information and the dataset are used to estimate VWC, the results show significant correlations among several vegetation water indices (WSI, NDII, NDWI1640, WI/NDVI) and the canopy FMC of winter wheat (n=45). Finally, future development directions of VWC detection based on optical remote sensing techniques are summarized.
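A spectral water index of the kind reviewed here is simply a normalized band difference; the NIR/SWIR pairing (e.g. bands near 858 nm and 1640 nm for an NDWI1640-style index) is an assumption for illustration:

```python
import numpy as np

def nd_water_index(r_nir, r_swir):
    """Normalized-difference water index from NIR and SWIR reflectance.

    Leaf water absorbs strongly in the SWIR, so wetter canopies have
    lower SWIR reflectance and therefore a higher index value.
    """
    r_nir = np.asarray(r_nir, dtype=float)
    r_swir = np.asarray(r_swir, dtype=float)
    return (r_nir - r_swir) / (r_nir + r_swir)
```

Empirical FMC or EWT retrieval then regresses field-measured water content against such index values.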

  15. Innovative Techniques for Estimating Illegal Activities in a Human-Wildlife-Management Conflict

    Science.gov (United States)

    Cross, Paul; St. John, Freya A. V.; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management. PMID:23341973
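The RRT unmasking step can be sketched for a forced-response design; the design probabilities below are illustrative, not necessarily those used in the Welsh survey:

```python
import numpy as np

def rrt_estimate(yes_prop, p_truth, p_forced_yes):
    """Unmask prevalence from a forced-response RRT survey.

    Observed P(yes) = p_truth * pi + p_forced_yes, so solve for pi.
    """
    return (yes_prop - p_forced_yes) / p_truth

# Simulated survey: true prevalence 10%; respondents answer truthfully with
# probability 5/6, otherwise a die roll forces "yes" or "no" with equal chance.
rng = np.random.default_rng(2)
true_pi, n = 0.10, 200_000
truthful = rng.random(n) < 5 / 6
forced_yes = rng.random(n) < 0.5          # forced answer when not truthful
sensitive = rng.random(n) < true_pi       # who actually broke the rule
answers = np.where(truthful, sensitive, forced_yes)

pi_hat = rrt_estimate(answers.mean(), p_truth=5 / 6, p_forced_yes=1 / 12)
```

Because the interviewer never knows whether any individual "yes" was forced or truthful, respondents are protected while the aggregate prevalence remains estimable.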

  16. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Directory of Open Access Journals (Sweden)

    Paul Cross

    Full Text Available Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.

  17. [Research Progress of Vitreous Humor Detection Technique on Estimation of Postmortem Interval].

    Science.gov (United States)

    Duan, W C; Lan, L M; Guo, Y D; Zha, L; Yan, J; Ding, Y J; Cai, J F

    2018-02-01

    Estimation of postmortem interval (PMI) plays a crucial role in forensic study and identification work. Because of its unique anatomical location, vitreous humor is considered suitable for estimating PMI, which has aroused interest among scholars, and some research has been carried out. The detection techniques for vitreous humor are constantly being developed and improved and have gradually been applied in forensic science; meanwhile, the study of PMI estimation using vitreous humor is updated rapidly. This paper reviews various techniques and instruments applied to vitreous humor detection, such as ion selective electrodes, capillary ion analysis, spectroscopy, chromatography, nano-sensing technology, automatic biochemical analysers, flow cytometers, etc., as well as the related research progress on PMI estimation in recent years. In order to provide a research direction for scholars and promote more accurate and efficient application of vitreous humor analysis in PMI estimation, some remaining problems are also analysed in this paper. Copyright© by the Editorial Department of Journal of Forensic Medicine.
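As one classic example of the vitreous-humor chemistry models behind these techniques, Sturner's (1963) linear regression relates vitreous potassium concentration to PMI; later studies report different coefficients, so the numbers here are illustrative of the approach rather than a current standard:

```python
def pmi_hours_sturner(k_mmol_per_l):
    """Sturner's classic linear regression: PMI (hours) from vitreous
    potassium (mmol/L). Coefficients vary between later studies, so this
    is an illustrative model, not a forensic standard."""
    return 7.14 * k_mmol_per_l - 39.1
```

Vitreous potassium rises roughly linearly after death as cell membranes leak, which is why such simple regressions work at all; modern instruments listed in the review refine the measurement rather than the model form.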

  18. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Science.gov (United States)

    Cross, Paul; St John, Freya A V; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.

  19. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven to be highly successful in the applications when there are a large number of redundant transmitters; but, for systems with little or no redundancy, averaging methods are not always reliable. That is, for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace the averaging techniques. This paper explores three well-known process estimation techniques; namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper will present an evaluation of the effectiveness of these methods in detecting transmitter drift in actual plant conditions. (authors)
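Of the three methods compared, AAKR is the simplest to sketch: a query observation is reconstructed as a kernel-weighted average of fault-free "memory" vectors, so a drifted channel is pulled back toward the value implied by its correlated neighbors. The data below are synthetic, not plant measurements:

```python
import numpy as np

def aakr_predict(memory, query, bandwidth=1.0):
    """Auto-associative kernel regression: reconstruct the query as a
    Gaussian-kernel-weighted average of fault-free memory vectors."""
    d2 = np.sum((memory - query) ** 2, axis=1)     # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian kernel weights
    return w @ memory / w.sum()

# Memory of normal operation for three correlated transmitters.
rng = np.random.default_rng(3)
t = rng.uniform(0, 1, 200)
memory = np.column_stack([t, 2 * t, 3 * t]) + 0.01 * rng.normal(size=(200, 3))

# Channel 0 has drifted high: consistent with channels 1-2 it should be ~0.5.
query = np.array([0.70, 1.0, 1.5])
estimate = aakr_predict(memory, query, bandwidth=0.2)
```

The difference between `query` and `estimate` on each channel is the drift indication; the averaging-based methods in the paper cannot produce this signal without redundant transmitters.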

  20. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
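The proportion-based sample size calculation described above can be written directly; z = 1.96 corresponds to the usual 95% confidence level:

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 for estimating a proportion p with
    absolute margin of error d; rounded up to a whole subject."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)
```

With the conservative p = 0.5 and a 5% margin this gives the familiar n of roughly 385; halving the margin roughly quadruples the required sample, which is the precision trade-off the article describes.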

  1. Development of technique for estimating primary cooling system break diameter in predicting nuclear emergency event sequence

    International Nuclear Information System (INIS)

    Tatebe, Yasumasa; Yoshida, Yoshitaka

    2012-01-01

    If an emergency event occurs in a nuclear power plant, appropriate action is selected and taken in accordance with the plant status, which changes from time to time, in order to prevent escalation and mitigate the event consequences. It is thus important to predict the event sequence and identify the plant behavior resulting from the action taken. In predicting the event sequence during a loss-of-coolant accident (LOCA), it is necessary to estimate break diameter. The conventional method for this estimation is time-consuming, since it involves multiple sensitivity analyses to determine the break diameter that is consistent with the plant behavior. To speed up the process of predicting the nuclear emergency event sequence, a new break diameter estimation technique that is applicable to pressurized water reactors was developed in this study. This technique enables the estimation of break diameter using the plant data sent from the safety parameter display system (SPDS), with focus on the depressurization rate in the reactor cooling system (RCS) during LOCA. The results of LOCA analysis, performed by varying the break diameter using the MAAP4 and RELAP5/MOD3.2 codes, confirmed that the RCS depressurization rate could be expressed by the log linear function of break diameter, except in the case of a small leak, in which RCS depressurization is affected by the coolant charging system and the high-pressure injection system. A correlation equation for break diameter estimation was developed from this function and tested for accuracy. Testing verified that the correlation equation could estimate break diameter accurately within an error of approximately 16%, even if the leak increases gradually, changing the plant status. (author)

  2. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used to increase the relative efficiency.
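The second VRT, following only a fraction of the generated photons, can be sketched with a toy tally; the key invariant is that carrying a statistical weight of 1/f keeps the scored mean unchanged while trading histories for variance (the per-photon contribution model below is illustrative, not an optical transport simulation):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100_000
contrib = rng.exponential(1.0, n)   # toy per-photon detector contribution

# Analog tally: score every photon with weight 1.
analog_mean = contrib.mean()

# VRT tally: follow only a fraction f of the photons, each carrying
# weight 1/f so the expected score (the signal mean) is preserved.
f = 0.2
kept = rng.random(n) < f
vrt_mean = np.sum(contrib[kept] / f) / n
```

The two estimators agree in expectation; only their variances (and hence the computing time to a target uncertainty) differ, which is exactly the trade-off the relative-efficiency figure quantifies.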

  3. SIMULTANEOUS ESTIMATION OF PHOTOMETRIC REDSHIFTS AND SED PARAMETERS: IMPROVED TECHNIQUES AND A REALISTIC ERROR BUDGET

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-01-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts

  4. An analytical comparison of different estimators for the density distribution of the catalyst in an experimental riser by a gammametric technique

    International Nuclear Information System (INIS)

    Lima, Emerson Alexandre de Oliveira; Dantas Carlos C.; Melo, Silvio de Barros; Santos, Valdemir Alexandre dos

    2005-01-01

    In this paper, we address the following questions: what form should the estimate of the ρ = ρ(x, y, z) function take? Which method describes the density distribution function most precisely? Which is the best estimator? Once the ρ = ρ(x, y, z) format and the approximation technique are defined, the next problems to be solved are the experimental arrangement and the pass-length configuration: finding the best parameter estimation for the ρ = ρ(x, y, z) function according to the C pass lengths and their spatial configuration. The latter is required to define the resolution of the ρ = ρ(x, y, z) function and the mechanical scanning movements of the arrangement. These definitions will be implemented in an automated arrangement, by a computational program, for further development of the reconstruction of catalyst density distribution in experimental risers. The precision evaluation was finally compared across arrangement geometries to find the pass-length spatial configuration that yields the best precision. The results are shown in graphics for the two known density distributions. As a conclusion, the parameters for an automated arrangement design are given under the required precision for the catalyst distribution reconstruction. (author)

  5. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Keyes, David E.

    2016-01-01

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  6. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander

    2016-10-25

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
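The log-likelihood being maximized can be sketched densely for small n, which is exactly the computation the ℋ-matrix format makes affordable at scale (the Cholesky factor gives both the determinant and the quadratic form). The exponential covariance and the grid search over the variance hyper-parameter are illustrative:

```python
import numpy as np

def neg_log_likelihood(variance, length, pts, y, nugget=1e-8):
    """Negative Gaussian log-likelihood for an exponential covariance
    C(h) = variance * exp(-h / length); dense stand-in for the H-matrix
    approximation used at large n."""
    h = np.abs(pts[:, None] - pts[None, :])
    C = variance * np.exp(-h / length) + nugget * np.eye(len(pts))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, y)                # whitened residuals
    logdet = 2.0 * np.sum(np.log(np.diag(L)))    # log det from Cholesky
    return 0.5 * (logdet + alpha @ alpha + len(pts) * np.log(2 * np.pi))

# Data drawn with variance 1 and length 0.3; grid search over variance.
rng = np.random.default_rng(5)
pts = np.linspace(0.0, 1.0, 60)
h = np.abs(pts[:, None] - pts[None, :])
y = np.linalg.cholesky(np.exp(-h / 0.3) + 1e-8 * np.eye(60)) @ rng.normal(size=60)
grid = [0.25, 0.5, 1.0, 2.0, 4.0]
best = min(grid, key=lambda v: neg_log_likelihood(v, 0.3, pts, y))
```

In the paper this inner evaluation is repeated on every optimizer iteration, which is why recomputing the Cholesky factor in O(kn log n) rather than O(n³) matters.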

  7. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
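The least squares inversion step can be sketched with a toy leakage operator; the matrix sizes and trend values below are illustrative, not GRACE data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Observations are basin signals smeared by a known "leakage" operator A
# (each row: how much of each basin's mass a smoothed observation sees).
n_obs, n_basins = 40, 4
A = rng.normal(size=(n_obs, n_basins))          # sensitivity/leakage matrix
m_true = np.array([-20.0, -25.0, 3.0, 7.0])     # basin trends (cm/yr, toy)
d = A @ m_true + 0.1 * rng.normal(size=n_obs)   # smoothed, noisy observations

# Invert for the basin amplitudes by linear least squares.
m_est, *_ = np.linalg.lstsq(A, d, rcond=None)
```

The systematic errors discussed in the abstract arise when the chosen basin set cannot represent the true mass distribution, i.e. when no `m` reproduces the observations regardless of the least squares fit.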

  8. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique

    International Nuclear Information System (INIS)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Levin, N W; Leonard, E F

    2008-01-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis so that at DW little further change is observed and (2) normalized resistivity is in the range of observation of healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DW(Clin)), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DW(cBIS)). Post-dialysis weight was reduced from 78.3 ± 28 to 77.1 ± 27 kg, and normalized resistivity (in units of 10⁻² Ω m³ kg⁻¹) increased into the range of healthy subjects; the difference between DW(Clin) and DW(cBIS) was 0.3 ± 0.2%. The results indicate that cBIS utilizing a dynamic technique continuously during dialysis is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques have been developed to detect DW as precisely, it is suggested as a standard to be evaluated clinically

  9. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    International Nuclear Information System (INIS)

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-01-01

    Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy.Methods: Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy.Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion

  10. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, Marina; Abidin, Diana Atiqah Zainal [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia); Das, Simon K. [Marine Ecosystem Research Centre, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi (Malaysia); Ghaffar, Mazlan Abd. [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia and Marine Ecosystem Research Centre, Faculty of Science and Technology, Universiti Kebangsaan M (Malaysia)

    2014-09-03

    The present study was conducted to investigate the food consumption pattern and gastric emptying time of the scat fish, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were examined using freshly prepared stomachs ligatured at the tip of a burette, and the maximum amount of distilled water held in the stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation using a radio-opaque barium sulphate (BaSO4) paste injected into wet shrimp in proportion to body weight. BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of the scat fish could be estimated. Qualitative X-radiography observation of gastric motility showed that a 200 g fish fed a maximum satiation meal (circa 11 g) completely emptied its stomach within 30-36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.
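The reported allometric model can be evaluated directly; units are as reported in the abstract (volume in ml for body weight W in grams, assumed from context):

```python
def stomach_volume_ml(weight_g):
    """Allometric stomach-volume model from the study:
    volume = 0.0000089 * W**2.93, with W the body weight in grams."""
    return 0.0000089 * weight_g ** 2.93
```

Because the exponent exceeds 1, stomach capacity grows faster than proportionally with body weight, which is why maximum meal size scales allometrically rather than as a fixed fraction of body mass.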

  11. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique.

    Science.gov (United States)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Leonard, E F; Levin, N W

    2008-06-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate normal hydration states (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise, and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis, so that at DW little further change is observed, and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DW(Clin)), the target post-dialysis weight was gradually decreased over the course of several treatments until the two dry weight criteria outlined above were met (DW(cBIS)). Post-dialysis weight was reduced from 78.3 +/- 28 to 77.1 +/- 27 kg (p hydration status. Since no current techniques detect DW as precisely, the method is suggested as a standard to be evaluated clinically.

  12. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    Science.gov (United States)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

    The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette; the maximum amount of distilled water held by the stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume=0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes, fed to satiation, were examined at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 proved suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of the scat fish could be estimated. Qualitative X-radiography observation of gastric motility showed that fish of 200 g fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30 - 36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of the scat fish in captivity.

  13. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical results are complemented by a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
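    A sketch of the mean estimator behind the IST (the simulated data and distributional choices below are our own assumptions, not the paper's): the long-list sample reports the sum of the sensitive item and an innocuous item, the short-list sample reports the innocuous item alone, and the difference of the two sample means estimates the sensitive mean.

```python
import numpy as np

rng = np.random.default_rng(3)
mu_sensitive, mu_innocuous = 4.0, 10.0   # illustrative population means
n_long, n_short = 4000, 4000

sensitive = rng.normal(mu_sensitive, 1.0, n_long)
innocuous_long = rng.normal(mu_innocuous, 2.0, n_long)
innocuous_short = rng.normal(mu_innocuous, 2.0, n_short)

long_reports = sensitive + innocuous_long   # respondents reveal only the sum
short_reports = innocuous_short             # control sample: innocuous item only

# Item-sum estimator of the sensitive mean
est = long_reports.mean() - short_reports.mean()
```

Anonymity is preserved because no individual long-list report can be decomposed into its sensitive and innocuous parts.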

  14. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    Science.gov (United States)

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

    High flood occurrences with large environmental damage show a growing trend in Iran. Dynamic movements of water during a flood cause different environmental damage in geographical areas with different characteristics, such as topographic conditions. In general, the environmental effects and damage caused by a flood in an area can be investigated from different points of view. The current study aims to detect the environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps: steps 1 to 3 calculate and estimate the flood zone map of the study area, while step 4 determines the environmental effects of the flood occurrence. Based on our studies, a wide range of accuracy for estimating the environmental effects of flood occurrence was obtained using flood zone mapping techniques. Moreover, it was identified that the existing Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and can reduce the flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be preserved environmentally.

  15. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    International Nuclear Information System (INIS)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-01-01

    The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette; the maximum amount of distilled water held by the stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume=0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes, fed to satiation, were examined at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 proved suitable for tracking the movement of feed/prey in the stomach over time, so the gastric emptying time of the scat fish could be estimated. Qualitative X-radiography observation of gastric motility showed that fish of 200 g fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30 - 36 hrs. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of the scat fish in captivity.

  16. Performance Comparison of Adaptive Estimation Techniques for Power System Small-Signal Stability Assessment

    Directory of Open Access Journals (Sweden)

    E. A. Feilat

    2010-12-01

    Full Text Available This paper demonstrates the assessment of the small-signal stability of a single-machine infinite-bus power system under widely varying loading conditions using the concept of synchronizing and damping torque coefficients. The coefficients are calculated from the time responses of the rotor angle, speed, and torque of the synchronous generator. Three adaptive computation algorithms, including Kalman filtering, Adaline, and recursive least squares, have been compared for estimating the synchronizing and damping torque coefficients. The steady-state performance of the three adaptive techniques is compared with the conventional static least squares technique through computer simulations at different loading conditions. The algorithms are compared to each other in terms of speed of convergence and accuracy. Recursive least squares estimation offers several advantages, including significant reductions in computing time and computational complexity. The tendency of an unsupplemented static exciter to degrade the system damping under medium and heavy loading is verified. Consequently, a power system stabilizer whose parameters are adjusted to compensate for variations in the system loading is designed using the phase compensation method. The effectiveness of the stabilizer in enhancing the dynamic stability over a wide range of operating conditions is verified through the calculation of the synchronizing and damping torque coefficients using the recursive least squares technique.
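    A minimal recursive-least-squares sketch for the torque decomposition described above (electrical torque deviation modeled as dTe = Ks*d_delta + Kd*d_omega); the synthetic signals and the true coefficient values are illustrative assumptions, not data from the paper:

```python
import numpy as np

def rls(phi_seq, y_seq, lam=1.0):
    """Recursive least squares with forgetting factor lam."""
    n = phi_seq.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e6                        # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # update estimate
        P = (P - np.outer(k, phi @ P)) / lam   # update covariance
    return theta

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
d_delta = np.sin(2 * np.pi * 0.5 * t)          # rotor-angle deviation
d_omega = np.gradient(d_delta, t)              # speed deviation
Ks_true, Kd_true = 1.2, 0.8                    # assumed "true" coefficients
d_Te = Ks_true * d_delta + Kd_true * d_omega + 0.01 * rng.standard_normal(t.size)

Ks_est, Kd_est = rls(np.column_stack([d_delta, d_omega]), d_Te)
```

With a forgetting factor lam < 1, the same loop tracks slowly varying coefficients, which is the practical appeal noted in the abstract.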

  17. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    Science.gov (United States)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently been possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field varies linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separations up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter corresponds to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the evaluation of the current density.
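    A sketch of the linear-gradient estimate underlying the curlometer: assuming B varies linearly across the tetrahedron, a least-squares fit of the four measurements gives the magnetic gradient tensor, whose antisymmetric part yields curl B and hence J = curl B / mu0. The spacecraft positions and the test field below are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def curlometer(positions, b_fields):
    """Current density from four-point B measurements, assuming a
    linearly varying field: fit dB = dr @ G, then J = curl(B) / mu0."""
    dr = positions - positions.mean(axis=0)    # 4x3 relative positions
    db = b_fields - b_fields.mean(axis=0)      # 4x3 relative fields
    G, *_ = np.linalg.lstsq(dr, db, rcond=None)  # G[i, j] = dB_j / dx_i
    curl = np.array([G[1, 2] - G[2, 1],
                     G[2, 0] - G[0, 2],
                     G[0, 1] - G[1, 0]])
    return curl / MU0

# Synthetic check with an exactly linear field B = (0, 1e-9 * x, 0) T,
# whose curl is (0, 0, 1e-9) T/m; positions (km-scale numbers) are illustrative.
pos = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
b = np.array([[0., 1e-9 * x, 0.] for x, _, _ in pos])
J = curlometer(pos, b)
```

For a truly linear field the estimate is exact; the paper's point is precisely that turbulence at sub-separation scales breaks this assumption.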

  18. Signal Processing of Ground Penetrating Radar Using Spectral Estimation Techniques to Estimate the Position of Buried Targets

    Directory of Open Access Journals (Sweden)

    Shanker Man Shrestha

    2003-11-01

    Full Text Available Super-resolution is very important in the signal processing of GPR (ground penetrating radar) to resolve closely buried targets. However, it is not easy to obtain high resolution, as GPR signals are very weak and enveloped in noise. The MUSIC (multiple signal classification) algorithm, which is well known for its super-resolution capacity, has been implemented for the signal and image processing of GPR. In addition, a conventional spectral estimation technique, the FFT (fast Fourier transform), has also been implemented for high-precision received signal levels. In this paper, we propose the CPM (combined processing method), which combines the time-domain response of the MUSIC algorithm with the conventional IFFT (inverse fast Fourier transform) to obtain super-resolution and a high-precision signal level. To support the proposal, a detailed simulation was performed analyzing the SNR (signal-to-noise ratio). Moreover, a field experiment at a research field and a laboratory experiment at the University of Electro-Communications, Tokyo, were also performed for thorough investigation, and they supported the proposed method. All the simulation and experimental results are presented.

  19. A semi-analytical method to estimate the effective slip length of spreading spherical-cap shaped droplets using Cox theory

    Science.gov (United States)

    Wörner, M.; Cai, X.; Alla, H.; Yue, P.

    2018-03-01

    The Cox–Voinov law on dynamic spreading relates the difference between the cubic values of the apparent contact angle (θ) and the equilibrium contact angle to the instantaneous contact line speed (U). Comparing spreading results with this hydrodynamic wetting theory requires accurate data on θ and U during the entire process. We consider the case when gravitational forces are negligible, so that the shape of the spreading drop can be closely approximated by a spherical cap. Using geometrical dependencies, we transform the general Cox law into a semi-analytical relation for the temporal evolution of the spreading radius. Evaluating this relation numerically shows that the spreading curve becomes independent from the gas viscosity when the latter is less than about 1% of the drop viscosity. Since inertia may invalidate the made assumptions in the initial stage of spreading, a quantitative criterion for the time when the spherical-cap assumption is reasonable is derived utilizing phase-field simulations on the spreading of partially wetting droplets. The developed theory allows us to compare experimental/computational spreading curves for spherical-cap shaped droplets with Cox theory without the need for instantaneous data on θ and U. Furthermore, the fitting of Cox theory enables us to estimate the effective slip length. This is potentially useful for establishing relationships between slip length and parameters in numerical methods for moving contact lines.
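    A hedged numerical sketch of the kind of inversion the abstract describes: writing the Cox–Voinov relation as theta^3 = theta_eq^3 + 9*Ca*ln(R/lambda) with Ca = mu*U/sigma, a single instantaneous observation (theta, U, R) can be inverted for an effective slip length lambda. All property values below are illustrative assumptions, not data from the paper.

```python
import math

mu, sigma = 0.05, 0.02          # viscosity (Pa*s), surface tension (N/m), assumed
theta_eq = math.radians(30.0)   # equilibrium contact angle (assumed)
R = 1e-3                        # instantaneous spreading radius (m), assumed
U = 1e-3                        # contact-line speed (m/s), assumed
theta_app = math.radians(45.0)  # measured apparent angle (assumed)

Ca = mu * U / sigma             # capillary number
# Invert theta^3 - theta_eq^3 = 9 * Ca * ln(R / lam) for the slip length
lam = R / math.exp((theta_app**3 - theta_eq**3) / (9.0 * Ca))
```

With these numbers the recovered slip length is of order 0.1 nm, i.e. the logarithm makes lambda exquisitely sensitive to the measured angle, which is why fitting the whole spreading curve (as the paper proposes) is preferable to a single-point inversion.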

  20. Optimization of Water/Oil/Surfactant System for Preparation of Medium-Chain-Length Poly-3-Hydroxyalkanoates (mcl-PHA)-Incorporated Nanoparticles via Nanoemulsion Templating Technique.

    Science.gov (United States)

    Ishak, K A; Annuar, M Suffian M; Ahmad, N

    2017-12-01

    Polymeric nanoparticles have gained widespread interest in the food and pharmaceutical industries as delivery systems that encapsulate, protect, and release lipophilic compounds such as omega-3 fatty acids, fat-soluble vitamins, carotenoids, carvedilol, cyclosporine, and ketoprofen. In this study, a medium-chain-length poly-3-hydroxyalkanoate (mcl-PHA)-incorporated nanoparticle was developed via a facile, organic-solvent-free nanoemulsion templating technique. The water content (W/surfactant-to-oil (S/O)), S/O, and Cremophor EL-to-Span 80 (Cremo/Sp80) ratios were first optimized using response surface methodology (RSM) to obtain the nanoemulsion template prior to incorporation of mcl-PHA, and their effects on nanoemulsion formation were investigated. The mcl-PHA-incorporated nanoparticle system showed good preservation of β-carotene and extended storage stability.

  1. The evolutionary rates of HCV estimated with subtype 1a and 1b sequences over the ORF length and in different genomic regions.

    Directory of Open Access Journals (Sweden)

    Manqiong Yuan

    Full Text Available Considerable progress has been made in the HCV evolutionary analysis, since the software BEAST was released. However, prior information, especially the prior evolutionary rate, which plays a critical role in BEAST analysis, is always difficult to ascertain due to various uncertainties. Providing a proper prior HCV evolutionary rate is thus of great importance. 176 full-length sequences of HCV subtype 1a and 144 of 1b were assembled by taking into consideration the balance of the sampling dates and the even dispersion in phylogenetic trees. According to the HCV genomic organization and biological functions, each dataset was partitioned into nine genomic regions and two routinely amplified regions. A uniform prior rate was applied to the BEAST analysis for each region and also the entire ORF. All the obtained posterior rates for 1a are of a magnitude of 10^-3 substitutions/site/year and in a bell-shaped distribution. Significantly lower rates were estimated for 1b and some of the rate distribution curves resulted in a one-sided truncation, particularly under the exponential model. This indicates that some of the rates for subtype 1b are less accurate, so they were adjusted by including more sequences to improve the temporal structure. Among the various HCV subtypes and genomic regions, the evolutionary patterns are dissimilar. Therefore, an applied estimation of the HCV epidemic history requires the proper selection of the rate priors, which should match the actual dataset so that they can fit for the subtype, the genomic region and even the length. By referencing the findings here, future evolutionary analysis of the HCV subtype 1a and 1b datasets may become more accurate and hence prove useful for tracing their patterns.

  2. Estimation of the Latitude, the Gnomon’s Length and Position About Sinbeop-Jipyeong-Ilgu in the Late of Joseon Dynasty

    Directory of Open Access Journals (Sweden)

    Byeong-Hee Mihn

    2017-06-01

    Full Text Available In this study, the characteristics of a horizontal sundial from the Joseon Dynasty were investigated. Korea's Treasure No. 840 (T840) is a Western-style horizontal sundial where hour-lines and solar-term-lines are engraved. The inscription of this sundial indicates that the latitude (altitude of the north celestial pole) is 37° 39´, but the gnomon is lost. In the present study, the latitude of the sundial and the length of the gnomon were estimated based only on the hour-lines and solar-term-lines of the horizontal sundial. When statistically calculated from the convergent point obtained by extending the hour-lines, the latitude of this sundial was 37° 15´ ± 26´, which showed a 24´ difference from the record of the inscription. When it was also assumed that a convergent point is changeable, the estimation of the sundial's latitude was found to be sensitive to the variation of this point. This study found that T840 used a vertical gnomon, that is, perpendicular to the horizontal plane, rather than an inclined triangular gnomon, and a horn-shaped mark like a vertical gnomon is cut on its surface. The length of the gnomon engraved on the artifact was 43.1 mm, and in the present study was statistically calculated as 43.7 ± 0.7 mm. In addition, the position of the gnomon according to the original inscription and our calculation showed an error of 0.3 mm.
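    For context, the textbook horizontal-dial geometry (not the paper's statistical estimator): the hour line for hour angle H makes an angle a with the noon line, where tan a = sin(latitude) * tan H, and H advances 15° per hour. Using the latitude from the inscription:

```python
import math

# Latitude from the sundial's inscription: 37 deg 39 min
lat = math.radians(37 + 39 / 60)

def hour_line_angle_deg(hours_from_noon: float) -> float:
    """Angle (deg) between the noon line and the hour line on a
    horizontal dial: tan(a) = sin(latitude) * tan(H)."""
    H = math.radians(15.0 * hours_from_noon)
    return math.degrees(math.atan(math.sin(lat) * math.tan(H)))

a3 = hour_line_angle_deg(3)   # the 3 pm (or 9 am) line, about 31.4 deg
```

Because every hour line's direction depends on sin(latitude), the engraved hour lines alone constrain the latitude, which is the basis of the paper's convergent-point estimate.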

  3. A Method for Estimating the Aerodynamic Roughness Length with NDVI and BRDF Signatures Using Multi-Temporal Proba-V Data

    Directory of Open Access Journals (Sweden)

    Mingzhao Yu

    2016-12-01

    Full Text Available Aerodynamic roughness length is an important parameter for surface flux estimates. This paper develops an innovative method for the estimation of aerodynamic roughness length (z0m) over farmland with a new vegetation index, the Hot-darkspot Vegetation Index (HDVI). To obtain this new index, the normalized-difference hot-darkspot index (NDHD) is introduced using a semi-empirical, kernel-driven bidirectional reflectance model with multi-temporal Proba-V 300-m top-of-canopy (TOC) reflectance products. A linear relationship between HDVI and z0m was found during the crop growth period. Wind profile data from two field automatic weather stations (AWS) were used to calibrate the model: one site is in Guantao County in the Hai Basin, where double-cropping systems and crop rotations with summer maize and winter wheat are implemented; the other is in the middle reach of the Heihe River Basin, from the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) project, with the main crop of spring maize. An iterative algorithm based on Monin–Obukhov similarity theory is employed to calculate the field z0m from time series. Results show that the relationship between HDVI and z0m is more pronounced than that between NDVI and z0m for spring maize at the Yingke site, with an R2 value that improved from 0.636 to 0.772. At the Guantao site, HDVI also exhibits better performance than NDVI, with R2 increasing from 0.630 to 0.793 for summer maize and from 0.764 to 0.790 for winter wheat. HDVI can capture the impacts of crop residue on z0m, whereas NDVI cannot.
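    Under neutral stability, the iterative Monin–Obukhov scheme mentioned above reduces to fitting the logarithmic wind profile u(z) = (u*/k) * ln(z/z0m). A sketch of that simplified fit (measurement heights and the "true" u* and z0m are synthetic assumptions, not AWS data from the paper):

```python
import math
import numpy as np

KARMAN = 0.4                                    # von Karman constant
z = np.array([2.0, 4.0, 10.0])                  # anemometer heights (m), assumed
z0m_true, ustar_true = 0.05, 0.35               # assumed roughness length, friction velocity
u = ustar_true / KARMAN * np.log(z / z0m_true)  # noise-free synthetic profile

# u is linear in ln(z): slope = u*/k, intercept = -(u*/k) * ln(z0m)
slope, intercept = np.polyfit(np.log(z), u, 1)
ustar = slope * KARMAN
z0m = math.exp(-intercept / slope)
```

With stability corrections added, the same regression must be iterated together with the Obukhov length, which is the full algorithm the paper applies to the AWS time series.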

  4. Estimation of endogenous faecal calcium in buffalo (BOS bubalis) by isotope dilution technique

    International Nuclear Information System (INIS)

    Singh, S.; Sareen, V.K.; Marwaha, S.R.; Sekhon, B.; Bhatia, I.S.

    1973-01-01

    Detailed investigations of the isotope-dilution technique for the estimation of endogenous faecal calcium were conducted with buffalo calves fed a growing ration consisting of wheat straw, green lucerne and a concentrate mix. The endogenous faecal calcium was 3.71 g/day, which is 17.8 percent of the total faecal calcium. The apparent and true digestibilities of Ca were calculated as 51 and 60 percent, respectively. Endogenous faecal calcium can be estimated in buffalo calves by giving a single subcutaneous injection of Ca-45 and collecting blood samples on the 12th and 21st days only, together with a representative sample from the faeces collected from the 13th through the 22nd day after the injection. (author)
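    A back-of-envelope consistency check of the reported figures, using the standard definitions (apparent digestibility = (I - F)/I and true digestibility = (I - (F - E))/I, so their difference equals E/I); the values below come from the abstract, the algebra is ours:

```python
# Values reported in the abstract
E = 3.71                    # endogenous faecal Ca (g/day)
apparent_dig = 0.51         # apparent digestibility
true_dig = 0.60             # true digestibility

# true - apparent = E / intake  =>  intake = E / (true - apparent)
intake = E / (true_dig - apparent_dig)     # implied Ca intake, ~41 g/day
faecal = (1 - apparent_dig) * intake       # implied total faecal Ca, ~20 g/day
endogenous_fraction = E / faecal           # ~0.18
```

The implied endogenous fraction (about 18%) agrees with the abstract's reported 17.8% to within rounding, so the reported numbers are internally consistent.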

  5. A new technique for fire risk estimation in the wildland urban interface

    Science.gov (United States)

    Dasgupta, S.; Qu, J. J.; Hao, X.

    A novel technique based on the physical variable of pre-ignition energy is proposed for assessing fire risk in the Grassland-Urban-Interface. The physical basis lends the index meaning, site- and season-independent applicability, and possibilities for computing spread rates and ignition probabilities, features that contemporary fire risk indices usually lack. The method requires estimates of grass moisture content and temperature. A constrained radiative-transfer inversion scheme on MODIS NIR-SWIR reflectances, which reduces solution ambiguity, is used for grass moisture retrieval, while MODIS land surface temperature and emissivity products are used for retrieving grass temperature. Subpixel urban contamination of the MODIS reflective and thermal signals over a Grassland-Urban-Interface pixel is corrected using periodic estimates of urban influence from high-spatial-resolution ASTER data.

  6. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

  7. Application of genetic algorithm (GA) technique on demand estimation of fossil fuels in Turkey

    International Nuclear Information System (INIS)

    Canyurt, Olcay Ersel; Ozturk, Harun Kemal

    2008-01-01

    The main objective is to investigate Turkey's fossil fuel demand, projections and supplies using the structure of Turkish industry and economic conditions. This study develops scenarios to analyze fossil fuel consumption and makes future projections based on a genetic algorithm (GA). The models, developed in nonlinear form, are applied to the coal, oil and natural gas demand of Turkey. Genetic algorithm demand estimation models (GA-DEM) are developed to estimate future coal, oil and natural gas demand values based on population, gross national product, and import and export figures. It may be concluded that the proposed models can be used as alternative solutions and estimation techniques for the future fossil fuel utilization values of any country. In the study, the coal, oil and natural gas consumption of Turkey is projected; Turkish fossil fuel demand has increased dramatically, with coal, oil and natural gas consumption values estimated to increase by factors of almost 2.82, 1.73 and 4.83, respectively, between 2000 and 2020. In the figures, GA-DEM results are compared with World Energy Council Turkish National Committee (WECTNC) projections. The observed results indicate that WECTNC overestimates fossil fuel consumption. (author)
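    A minimal GA sketch in the spirit of GA-DEM: evolving the coefficients (a, b) of an assumed one-variable demand form E = a * GNP^b against synthetic data. The model form, the data and all GA settings are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
gnp = np.linspace(1.0, 5.0, 20)        # synthetic explanatory variable
demand = 2.0 * gnp ** 1.5              # synthetic "observed" demand (a=2, b=1.5)

def fitness(pop):
    """Negative mean squared error of each (a, b) candidate row."""
    pred = pop[:, :1] * gnp ** pop[:, 1:2]
    return -np.mean((pred - demand) ** 2, axis=1)

pop = rng.uniform([0.1, 0.5], [5.0, 3.0], size=(60, 2))   # initial population
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]                    # elitist selection
    mates = parents[rng.integers(0, 30, 30)]
    children = (parents + mates) / 2                      # arithmetic crossover
    children += rng.normal(0.0, 0.05, children.shape)     # Gaussian mutation
    pop = np.vstack([parents, children])

a_est, b_est = pop[np.argmax(fitness(pop))]               # best individual
```

GA-DEM uses several drivers (population, GNP, imports, exports) rather than one, but the fit-coefficients-by-evolution loop is the same idea.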

  8. An experimental result of estimating an application volume by machine learning techniques.

    Science.gov (United States)

    Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko

    2015-01-01

    In this study, we improved the usability of smartphones by automating users' operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and its development environment offers the highest flexibility. In this paper, we describe an application that automatically adjusts application volume. Users can easily forget to adjust the volume, because they need to push the volume buttons to alter it depending on the situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set separately from ringtone volume on Android devices, and these volume settings are associated with each specific application, including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, thereby estimating whether the volume should be adjusted using machine learning techniques via Weka.

  9. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.

  10. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

    Full Text Available Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
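    A two-dimensional sketch of the spring-relaxation idea: each range measurement acts as a spring whose rest length is the measured distance, and the unknown node repeatedly moves a small step along the net spring force. The anchor geometry, step size and iteration count are illustrative choices, not the paper's protocol:

```python
import numpy as np

# Four anchors with known positions and noise-free range measurements
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([4.0, 6.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)

est = np.array([5.0, 5.0])        # initial guess at the area centre
for _ in range(200):
    diff = est - anchors                          # anchor -> estimate vectors
    dist = np.linalg.norm(diff, axis=1)
    # Each link pulls/pushes the estimate by its range error
    force = ((ranges - dist)[:, None] * diff / dist[:, None]).sum(axis=0)
    est = est + 0.1 * force                       # relax with a small step
```

In the distributed setting each node runs this update using only its neighbours' current estimates, which is what keeps the scheme lightweight.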

  11. Comparison of Available Bandwidth Estimation Techniques in Packet-Switched Mobile Networks

    DEFF Research Database (Denmark)

    López Villa, Dimas; Ubeda Castellanos, Carlos; Teyeb, Oumer Mohammed

    2006-01-01

    The relative contribution of the transport network towards the per-user capacity in mobile telecommunication systems is becoming very important due to the ever-increasing air-interface data rates. Thus, resource management procedures such as admission, load and handover control can make use of information regarding the available bandwidth in the transport network, as it could end up being the bottleneck rather than the air interface. This paper provides a comparative study of three well-known available bandwidth estimation techniques, i.e. TOPP, SLoPS and pathChirp, taking into account...

  12. A Cost Estimation Analysis of U.S. Navy Ship Fuel-Savings Techniques and Technologies

    Science.gov (United States)

    2009-09-01

    Master's thesis. Cited reference: Horngren, C. T., Datar, S. M., & Foster, G. (2006). Cost Accounting: A Managerial Emphasis (12th ed.). Saddle River, NJ: Pearson.

  13. Incorrectly Interpreting the Carbon Mass Balance Technique Leads to Biased Emissions Estimates from Global Vegetation Fires

    Science.gov (United States)

    Surawski, N. C.; Sullivan, A. L.; Roxburgh, S. H.; Meyer, M.; Polglase, P. J.

    2016-12-01

    Vegetation fires are a complex phenomenon and have a range of global impacts including influences on climate. Even though fire is a necessary disturbance for the maintenance of some ecosystems, a range of anthropogenically deleterious consequences are associated with it, such as damage to assets and infrastructure, loss of life, as well as degradation to air quality leading to negative impacts on human health. Estimating carbon emissions from fire relies on a carbon mass balance technique which has evolved with two different interpretations in the fire emissions community. Databases reporting global fire emissions estimates use an approach based on `consumed biomass' which is an approximation to the biogeochemically correct `burnt carbon' approach. Disagreement between the two methods occurs because the `consumed biomass' accounting technique assumes that all burnt carbon is volatilized and emitted. By undertaking a global review of the fraction of burnt carbon emitted to the atmosphere, we show that the `consumed biomass' accounting approach overestimates global carbon emissions by 4.0%, or 100 Teragrams, annually. The required correction is significant and represents 9% of the net global forest carbon sink estimated annually. To correctly partition burnt carbon between that emitted to the atmosphere and that remaining as a post-fire residue requires the post-burn carbon content to be estimated, which is quite often not undertaken in atmospheric emissions studies. To broaden our understanding of ecosystem carbon fluxes, it is recommended that the change in carbon content associated with burnt residues be accounted for. Apart from correctly partitioning burnt carbon between the emitted and residue pools, it enables an accounting approach which can assess the efficacy of fire management operations targeted at sequestering carbon from fire. 
These findings are particularly relevant for the second commitment period for the Kyoto protocol, since improved landscape fire

  14. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection are important tasks for robot navigation. Many feature-matching techniques have been proposed previously; this paper proposes an improved matching of features between successive video frames using a neural network methodology in order to reduce the computation time of matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. Each extracted feature is assigned a distance based on the Kinect depth data, which the robot can use to determine its navigation path and for obstacle detection applications.
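
The paper's neural-network matcher is not reproduced here; as an illustrative sketch of the general pipeline it describes (invariant descriptors matched across successive frames, then depth attached from the Kinect depth map), with a plain nearest-neighbour ratio test standing in for the network:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test.
    desc_a: (Na, D) and desc_b: (Nb, D) descriptor arrays from two frames."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:  # keep only unambiguous matches
            matches.append((i, j1))
    return matches

def depth_at(keypoints, depth_map):
    # Assign each (x, y) keypoint the sensor depth value at that pixel.
    return [depth_map[int(y), int(x)] for x, y in keypoints]
```

The matched keypoints, with depth attached, give the robot range estimates for path planning; the descriptor arrays and pixel layout above are assumptions, not the paper's data format.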

  15. Field Application of Cable Tension Estimation Technique Using the h-SI Method

    Directory of Open Access Journals (Sweden)

    Myung-Hyun Noh

    2015-01-01

Full Text Available This paper investigates the field applicability of a new system identification technique for estimating the tensile force of a cable in long-span bridges. The newly proposed h-SI method, which combines a sensitivity-updating algorithm with an advanced hybrid microgenetic algorithm, not only avoids the trap of local minima at the initial search stage but also finds the optimal solution with better numerical efficiency than existing methods. First, this paper overviews the tension-estimation procedure through a theoretical formulation. Second, the validity of the proposed technique is numerically examined using a set of dynamic data obtained from benchmark numerical samples considering the effects of sag extensibility and bending stiffness of a sag-cable system. Finally, the feasibility of the proposed method is investigated through actual field data extracted from the cable-stayed Seohae Bridge. The test results show that existing methods require precise initial data in advance, whereas the proposed method is not affected by such initial information. In particular, the proposed method improves accuracy and the convergence rate toward final values. Consequently, the proposed method can be more effective than existing methods in characterizing the tensile force variation of cable structures.

  16. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
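
The RBN mechanics described above reduce to evaluating a normative regression and expressing the observed-minus-expected discrepancy in standard-error units. A toy sketch; the coefficients and standard error of estimate below are invented for illustration and are not published norms:

```python
# Hypothetical regression-based norm (RBN) for one neuropsychological test.
# Coefficients and SEE are illustrative placeholders, NOT published values.
COEFS = {"intercept": 42.0, "age": -0.15, "educ": 0.8, "iq": 0.25}
SEE = 6.0  # standard error of estimate from the normative regression

def expected_score(age, educ, iq):
    """Demographically (and IQ-) adjusted expected test score."""
    return (COEFS["intercept"] + COEFS["age"] * age
            + COEFS["educ"] * educ + COEFS["iq"] * iq)

def rbn_z(observed, age, educ, iq):
    # Discrepancy between observed and expected performance, in SEE units;
    # large negative values flag performance below demographic expectation.
    return (observed - expected_score(age, educ, iq)) / SEE

z = rbn_z(observed=48, age=70, educ=16, iq=110)
```

Adding the estimated-IQ term is what distinguishes this from purely demographic norms: two patients with identical demographics but different premorbid IQ estimates receive different expected scores.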

  17. Using Length of Stay to Control for Unobserved Heterogeneity When Estimating Treatment Effect on Hospital Costs with Observational Data: Issues of Reliability, Robustness, and Usefulness.

    Science.gov (United States)

    May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles

    2016-10-01

    To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity when estimating treatment effect on cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity score weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to methods that control for LOS, complicating interpretation. Both the magnitude and significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS-controls. Treatment effect estimates using LOS-controls are not only suboptimal in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline) but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). 
To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms

  18. Utilization of nuclear techniques to estimate water erosion in tobacco plantations in Cuba

    International Nuclear Information System (INIS)

    Gil, Reinaldo H.; Peralta, José L.; Carrazana, Jorge; Fleitas, Gema; Aguilar, Yulaidis; Rivero, Mario; Morejón, Yilian M.; Oliveira, Jorge

    2015-01-01

    Soil erosion is a relevant factor in land degradation, causing several negative impacts at different levels in the environment, agriculture, etc. The tobacco plantations in the western part of the country have been negatively affected by water erosion due to natural and human factors. For the implementation of a strategy for sustainable land management, a key element is to quantify soil losses in order to establish policies for soil conservation. Nuclear techniques have advantages over traditional methods of assessing soil erosion and have been applied in different agricultural settings worldwide. Tobacco cultivation in Pinar del Río takes place on soils with high erosion levels, so it is important to apply techniques that support the quantification of the soil erosion rate. This work shows the use of the 137Cs technique to characterize the soil erosion status in two sectors of a farm with tobacco plantations located in the south-western plain of Pinar del Río province. The sampling strategy included the evaluation of selected transects in the slope direction for the studied site. The soil samples were collected so as to incorporate the whole 137Cs profile. Different conversion models were applied, and the Mass Balance Model II provided the most representative results, estimating the soil erosion rate at –18.28 to 8.15 t ha⁻¹ yr⁻¹. (author)
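
For illustration of how a conversion model turns a 137Cs inventory deficit into an erosion rate, the simplest option, the proportional model, can be sketched as follows (the study itself favoured the more refined Mass Balance Model II, which is not reproduced here):

```python
def proportional_model_erosion(inventory_loss_pct, bulk_density_kg_m3,
                               plough_depth_m, years_of_accumulation):
    """Soil loss rate Y (t ha^-1 yr^-1) from the simple proportional model:

        Y = 10 * d * B * X / (100 * T)

    X: percentage reduction in 137Cs inventory relative to a reference site,
    d: plough depth (m), B: bulk density (kg m^-3),
    T: years of 137Cs accumulation at the site."""
    return (10 * plough_depth_m * bulk_density_kg_m3
            * inventory_loss_pct) / (100 * years_of_accumulation)
```

For example, a 20% inventory deficit with B = 1300 kg/m3, d = 0.2 m and T = 50 yr gives about 10.4 t ha⁻¹ yr⁻¹; the numbers are illustrative, not the study's measurements.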

  19. Apple fruit diameter and length estimation by using the thermal and sunshine hours approach and its application to the digital orchard management information system.

    Science.gov (United States)

    Li, Ming; Chen, Meixiang; Zhang, Yong; Fu, Chunxia; Xing, Bin; Li, Wenyong; Qian, Jianping; Li, Sha; Wang, Hui; Fan, Xiaodan; Yan, Yujing; Wang, Yan'an; Yang, Xinting

    2015-01-01

    In apple cultivation, simulation models may be used to monitor fruit size during growth and development to predict production levels and to optimize fruit quality. Here, Fuji apples cultivated in spindle-type systems were used as the model crop. Apple size was measured during the growing period at intervals of about 20 days after full bloom, with three weather stations collecting orchard temperature and solar radiation data at different sites. A 2-year dataset (2011 and 2012) of apple fruit size measurements, organized according to the weather station deployment sites, was integrated into the model together with the two most important environmental factors, thermal and sunshine hours. The apple fruit diameter and length were simulated using physiological development time (PDT), an indicator that combines important environmental factors, such as temperature and photoperiod, as the driving variable. Compared to the model based on calendar development time (CDT), an indicator counting the days that elapse after full bloom, the PDT model improved the estimation accuracy to within 0.2 cm for fruit diameter and 0.1 cm for fruit length in an independent year (2013) using a similar data collection method. The PDT model was implemented in a web-based management information system for a digital orchard, and the digital system has been applied in Shandong Province, China since 2013. This system may be used to compute the dynamic curve of apple fruit size based on data obtained from a nearby weather station, and may provide important decision support for farmers, via the website and a short message service, to optimize crop production and, hence, economic benefit.
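
The PDT idea can be sketched as follows: development advances each day by a thermal effectiveness times a photoperiod effectiveness, and fruit size is then modeled as a function of accumulated PDT rather than calendar days. The response functions and every parameter value below are invented for illustration and are not the paper's fitted model:

```python
import math

def daily_effectiveness(t_mean, sunshine_h, t_base=10.0, t_opt=25.0, p_crit=8.0):
    """One day's contribution to PDT: a clipped linear temperature response
    multiplied by a clipped linear sunshine-hours response (both in [0, 1])."""
    thermal = max(0.0, min(1.0, (t_mean - t_base) / (t_opt - t_base)))
    photo = max(0.0, min(1.0, sunshine_h / p_crit))
    return thermal * photo

def fruit_diameter(pdt, d_max=8.5, k=0.12, pdt_half=40.0):
    # Logistic growth curve driven by accumulated PDT instead of calendar days.
    return d_max / (1.0 + math.exp(-k * (pdt - pdt_half)))

# 90 hypothetical days of (mean temperature C, sunshine hours) records
pdt = sum(daily_effectiveness(t, s) for t, s in [(22, 9), (18, 6), (28, 10)] * 30)
diameter_cm = fruit_diameter(pdt)
```

The appeal of the PDT driving variable is visible here: a cool, cloudy day contributes less development than a warm, sunny one, whereas a CDT model would count both as one day.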


  1. Fundamental length and relativistic length

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1988-01-01

    It is noted that the introduction of a fundamental length contradicts the conventional view of the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the resulting ''elongation formula'' permits one to resolve this problem

  2. Estimation of Apple Volume and Its Shape Indentation Using Image Processing Technique and Neural Network

    Directory of Open Access Journals (Sweden)

    M Jafarlou

    2014-04-01

    Full Text Available Physical properties of agricultural products, such as volume, are the most important parameters influencing grading and packaging systems. They should be measured accurately, as they are considered in any good system design. Image processing and neural network techniques are both non-destructive and useful methods which have recently been used for this purpose. In this study, images of apples were captured from a constant distance, processed in MATLAB software, and the edges of the apple images were extracted. The interior area of the apple image was divided into thin trapezoidal elements perpendicular to the longitudinal axis. The total volume of the apple was estimated by summing the incremental volumes of these elements revolved around the apple's longitudinal axis. An image of a half-cut apple was also captured in order to obtain the volume of the apple shape's indentation, which was subtracted from the previously estimated total volume. The real volume of the apples was measured using the water displacement method, and the relation between the real volume and the estimated volume was obtained. The t-test and Bland-Altman analysis indicated that the difference between the real volume and the estimated volume was not significant (p>0.05); the mean difference was 1.52 cm3 and the accuracy of measurement was 92%. Utilizing a neural network with input variables of dimension and mass increased the accuracy up to 97% and decreased the difference between the mean volumes to 0.7 cm3.
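
The element-summation step described above is, in effect, a solid-of-revolution computation; a minimal sketch, treating each thin element revolved about the longitudinal axis as a conical frustum (the silhouette radii and calibration factor are hypothetical inputs):

```python
import math

def volume_of_revolution(radii_px, slice_h_px, px_to_cm):
    """Estimate volume (cm^3) from silhouette half-widths sampled at
    successive positions along the longitudinal axis.

    radii_px: list of silhouette half-widths (pixels) at each axial position,
    slice_h_px: axial spacing between samples (pixels),
    px_to_cm: camera calibration factor (cm per pixel)."""
    h = slice_h_px * px_to_cm
    vol = 0.0
    for r1, r2 in zip(radii_px[:-1], radii_px[1:]):
        a, b = r1 * px_to_cm, r2 * px_to_cm
        # Volume of a frustum (truncated cone) between adjacent radii
        vol += math.pi * h * (a * a + a * b + b * b) / 3.0
    return vol
```

A quick sanity check: constant radii reproduce a cylinder's volume exactly, which is why the estimate converges as the slices become thinner.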

  3. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    Science.gov (United States)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared with the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often achieved by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step and performs the model propagation step afterwards. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile (up to 2000 m deep) data and surface partial CO2 measurements from the Mike weather station (66o N, 2o E) to estimate
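
For reference, the Joint EnKF baseline that the dual and smoothing-based schemes are compared against reduces to a single analysis step on an augmented vector of state and parameters. A toy perturbed-observation (stochastic) sketch with invented dimensions, not the NorESM configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_joint_update(ens, obs, obs_op, obs_err_std):
    """One stochastic EnKF analysis step on an augmented ensemble.

    ens: (n_aug, n_ens) ensemble of [state; parameters] vectors,
    obs: (n_obs,) observation vector, obs_op: (n_obs, n_aug) linear operator
    picking the observed components, obs_err_std: observation error std."""
    n_ens = ens.shape[1]
    hx = obs_op @ ens
    # Ensemble anomalies (deviations from the ensemble mean)
    a = ens - ens.mean(axis=1, keepdims=True)
    ha = hx - hx.mean(axis=1, keepdims=True)
    r = obs_err_std**2 * np.eye(len(obs))
    # Kalman gain from sample covariances; parameters are updated through
    # their sampled correlation with the observed state components.
    k = (a @ ha.T / (n_ens - 1)) @ np.linalg.inv(ha @ ha.T / (n_ens - 1) + r)
    # Each member assimilates a perturbed copy of the observations
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((len(obs), n_ens))
    return ens + k @ (obs_pert - hx)
```

Because the gain is built from the joint state-parameter sample covariance, unobserved parameters are nudged whenever they covary with observed state, which is exactly the coupling that can become inconsistent in strongly nonlinear regimes such as the spring bloom.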

  4. Flame Length

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — Flame length was modeled using FlamMap, an interagency fire behavior mapping and analysis program that computes potential fire behavior characteristics. The tool...

  5. Estimation of the technical indexes of an accelerator tritium-bearing exhaust gas decontamination system

    International Nuclear Information System (INIS)

    Yang Haisu

    2005-10-01

    According to the basic requirements of the accelerator application, the main properties and technical parameters of a purification system for tritium-bearing exhaust gas are analysed and estimated in detail. The system can be operated on the high-flux neutron production instrument. The vented amount of exhaust gas mixed with T (tritium) exceeds 4 m³/d, and the maximal tritium concentration is approximately 1 x 10¹² Bq/m³, so the tritium decontamination factor should exceed 1 x 10³. A purification technique based on catalytic oxidation, molecular-sieve filtration and adsorption, which is widely used both domestically and abroad, is suggested. Structurally, inlet and outlet valves and a three-stage tandem purification system, with fully automatic intelligent control plus manual control, are adopted, together with on-line tritium-concentration monitoring devices. (authors)

  6. Estimation of trace elements in some anti-diabetic medicinal plants using PIXE technique

    International Nuclear Information System (INIS)

    Naga Raju, G.J.; Sarita, P.; Ramana Murty, G.A.V.; Ravi Kumar, M.; Seetharami Reddy, B.; John Charles, M.; Lakshminarayana, S.; Seshi Reddy, T.; Reddy, S. Bhuloka; Vijayan, V.

    2006-01-01

    Trace elemental analysis was carried out in various parts of some anti-diabetic medicinal plants using PIXE technique. A 3 MeV proton beam was used to excite the samples. The elements Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, Br, Rb and Sr were identified and their concentrations were estimated. The results of the present study provide justification for the usage of these medicinal plants in the treatment of diabetes mellitus (DM) since they are found to contain appreciable amounts of the elements K, Ca, Cr, Mn, Cu, and Zn, which are responsible for potentiating insulin action. Our results show that the analyzed medicinal plants can be considered as potential sources for providing a reasonable amount of the required elements other than diet to the patients of DM. Moreover, these results can be used to set new standards for prescribing the dosage of the herbal drugs prepared from these plant materials

  7. 40Ar-39Ar method for age estimation: principles, technique and application in orogenic regions

    International Nuclear Information System (INIS)

    Dalmejer, R.

    1984-01-01

    A recently developed variant of the K-Ar age-estimation method, the 40 Ar/ 39 Ar technique, is described. This method does not require direct analysis of potassium; its content is calculated as a function of 39 Ar, which is formed from 39 K under neutron activation. Errors resulting from interactions of potassium and calcium nuclei with neutrons are considered. Attention is paid to the stepwise-heating technique used in the 40 Ar- 39 Ar method and to obtaining an age spectrum. The applicability of the isochron diagram is discussed for the case of excess argon in a sample. Examples of the application of the 40 Ar- 39 Ar method for dating events in orogenic regions are presented

  8. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for detecting decreased salivary flow. A novel assay system comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper and a coloring reagent, based on the color reaction of iodine-starch and the theory of paper chromatography, was designed. We investigated the relationship between resting whole salivary flow rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803). The method may be useful for screening decreased salivary flow, particularly in bedridden and disabled elderly people.

  9. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Khoo Mei; Ghaffar, Mazlan Abd. [School of Environmental and Natural Resource Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-09-03

    This study examines the movement of food items and estimates gastric emptying time using X-radiography techniques in the clownfish (Amphiprion ocellaris) fed in captivity. Fishes were voluntarily fed to satiation after being deprived of food for 72 hours, using pellets labelled with barium sulphate (BaSO{sub 4}). The movement of the food items was monitored at different times after feeding. A total of 36 hours was needed for the food items to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship of satiation meal size to body weight was allometric, with a power value of 1.28.

  10. A Semester-Long Project for Teaching Basic Techniques in Molecular Biology Such as Restriction Fragment Length Polymorphism Analysis to Undergraduate and Graduate Students

    Science.gov (United States)

    DiBartolomeis, Susan M.

    2011-01-01

    Several reports on science education suggest that students at all levels learn better if they are immersed in a project that is long term, yielding results that require analysis and interpretation. I describe a 12-wk laboratory project suitable for upper-level undergraduates and first-year graduate students, in which the students molecularly locate and map a gene from Drosophila melanogaster called dusky and one of dusky's mutant alleles. The mapping strategy uses restriction fragment length polymorphism analysis; hence, students perform most of the basic techniques of molecular biology (DNA isolation, restriction enzyme digestion and mapping, plasmid vector subcloning, agarose and polyacrylamide gel electrophoresis, DNA labeling, and Southern hybridization) toward the single goal of characterizing dusky and the mutant allele dusky73. Students work as individuals, pairs, or in groups of up to four students. Some exercises require multitasking and collaboration between groups. Finally, results from everyone in the class are required for the final analysis. Results of pre- and postquizzes and surveys indicate that student knowledge of appropriate topics and skills increased significantly, students felt more confident in the laboratory, and students found the laboratory project interesting and challenging. Former students report that the lab was useful in their careers. PMID:21364104

  11. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
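
At its core, the "inverse style control" described above is a maximum-likelihood regression: with Gaussian output distributions whose means are linear in the style vector, the ML estimate of that vector is weighted least squares. A single-Gaussian toy sketch (a real MRHSMM accumulates these statistics over all states, streams, and durations; the matrices below are invented):

```python
import numpy as np

def estimate_style(o, M, b, cov):
    """ML estimate of the style (explanatory) vector s for a Gaussian whose
    mean is linear in s:  mu = M @ s + b, observation covariance cov.

    The ML solution is the weighted least-squares estimate
        s* = (M^T W M)^{-1} M^T W (o - b),  W = cov^{-1}."""
    w = np.linalg.inv(cov)
    lhs = M.T @ w @ M
    rhs = M.T @ w @ (o - b)
    return np.linalg.solve(lhs, rhs)
```

With noise-free observations the true style vector is recovered exactly, which is the sense in which the technique is the inverse of style-controlled synthesis.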

  12. Right ventricular volume estimation with cine MRI; A comparative study between Simpson's rule and a new modified area-length method

    Energy Technology Data Exchange (ETDEWEB)

    Sawachika, Takashi (Yamaguchi Univ., Ube (Japan). School of Medicine)

    1993-04-01

    To quantitate right ventricular (RV) volumes easily using cine MRI, we developed a new method called the 'modified area-length method (MOAL method)'. To validate this method, we compared it to the conventional Simpson's rule. A Magnetom H15 (Siemens) was used, and 6 normal volunteers and 21 patients with various RV sizes were imaged with an ECG-triggered gradient echo method (FISP, TR 50 ms, TE 12 ms, slice thickness 9 mm). For Simpson's rule, transverse images of 12 sequential views covering the whole heart were acquired. For the MOAL method, two orthogonal views were imaged: a sagittal view including the RV outflow tract, and a coronal view defined from the sagittal image to cover the whole RV. From these images the RV areas (As, Ac) and the longest distance between the RV apex and the pulmonary valve (Lmax) were determined. By correlating RV volumes measured by Simpson's rule to As*Ac/Lmax, the RV volume could be estimated as follows: V=0.85*As*Ac/Lmax+4.55. Thus the MOAL method demonstrated excellent accuracy in quantitating RV volume, and the acquisition time was reduced to one-fifth of that required for Simpson's rule. This should be a highly promising method for routine clinical application. (author).
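
The reported regression makes the MOAL estimate a one-line computation; a direct transcription (assuming areas in cm², length in cm, volume in ml, consistent with the abstract's regression against Simpson's rule):

```python
def rv_volume_moal(area_sag_cm2, area_cor_cm2, l_max_cm):
    """RV volume (ml) from the modified area-length regression above:
    V = 0.85 * As * Ac / Lmax + 4.55,
    As: sagittal RV area, Ac: coronal RV area, Lmax: apex-to-pulmonary-valve
    distance."""
    return 0.85 * area_sag_cm2 * area_cor_cm2 / l_max_cm + 4.55
```

For instance, As = 30 cm², Ac = 25 cm² and Lmax = 7 cm give roughly 95.6 ml; the input values here are hypothetical.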

  13. Use of environmental isotope tracer and GIS techniques to estimate basin recharge

    Science.gov (United States)

    Odunmbaku, Abdulganiu A. A.

    The extensive use of ground water only began with advances in pumping technology in the early portion of the 20th century. Groundwater provides the majority of the fresh water supply for municipal, agricultural and industrial uses, primarily because of the little to no treatment it requires. Estimating the volume of groundwater available in a basin is a daunting task, and no accurate measurements can be made; water budgets and simulation models are primarily used to estimate the volume of water in a basin. Precipitation, land surface cover and subsurface geology are factors that affect recharge; these factors affect percolation, which in turn affects groundwater recharge. Depending on precipitation, soil chemistry, groundwater chemical composition, gradient and depth, the age and rate of recharge can be estimated. The present research proposes to estimate the recharge in the Mimbres, Tularosa and Diablo Basins using the chloride environmental isotope, the chloride mass-balance approach and GIS, and to determine the effect of elevation on recharge rate. The Mimbres and Tularosa Basins are located in southern New Mexico and extend southward into Mexico; the Diablo Basin is located in Texas and extends southward. This research utilizes the chloride mass-balance approach to estimate the recharge rate through the collection of groundwater data from wells and precipitation. The data were analysed statistically to eliminate duplication, outliers and incomplete records. Cluster analysis, Piper diagrams and statistical significance tests were performed on the groundwater parameters; the infiltration rate was determined using the chloride mass-balance technique. The data were then analysed spatially using ArcGIS 10. Regions of active recharge were identified in the Mimbres and Diablo Basins, but could not be clearly identified in the Tularosa Basin. 
CMB recharge for the Tularosa Basin yields 0.04037 mm/yr (0.0016 in/yr), for the Diablo Basin 0.047 mm/yr (0.0016 in/yr), and 0.2153 mm/yr (0.00848 in
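
The chloride mass-balance estimate itself is a one-line computation; a minimal sketch assuming steady state, with runoff and dry deposition neglected (the input values in the example are hypothetical, not the study's data):

```python
def cmb_recharge_mm_per_yr(precip_mm_yr, cl_precip_mg_l, cl_groundwater_mg_l):
    """Chloride mass balance: at steady state the chloride flux delivered by
    precipitation equals the flux carried past the root zone by recharge, so
        R = P * Cl_p / Cl_gw
    P: mean annual precipitation, Cl_p: chloride concentration in
    precipitation, Cl_gw: chloride concentration in groundwater."""
    return precip_mm_yr * cl_precip_mg_l / cl_groundwater_mg_l
```

Because recharge is inversely proportional to groundwater chloride, the very low rates reported above correspond to strongly chloride-enriched groundwater relative to rainfall.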

  14. Estimating the vibration level of an L-shaped beam using power flow techniques

    Science.gov (United States)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
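
The mobility bookkeeping described above can be made concrete with the standard point-junction power-flow expression for two components, consistent with, though not copied from, the paper's formulation:

```python
# Power transmitted from the source component to the receiver component
# through a point joint, for a point force F on the source:
#   free velocity at the joint: v_f = Y_transfer * F
#   transmitted power:          P = 0.5 * |v_f|^2 * Re(Y_recv) / |Y_src + Y_recv|^2
# Y_transfer: transfer mobility from the excitation point to the joint,
# Y_src / Y_recv: point mobilities of source and receiver seen at the joint.
def transmitted_power(force, y_transfer, y_src, y_recv):
    v_free = y_transfer * force
    return 0.5 * abs(v_free) ** 2 * y_recv.real / abs(y_src + y_recv) ** 2
```

Using exact finite-beam mobilities here yields the narrow-band response, while substituting infinite-beam mobilities yields the frequency-averaged estimate that the abstract reports as matching SEA at high frequencies.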

  15. Heart Failure: Diagnosis, Severity Estimation and Prediction of Adverse Events Through Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Evanthia E. Tripoliti

    Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is among the leading causes of admission encountered by healthcare professionals in clinical practice. The costs are very high, reaching up to 2% of the total health costs in developed countries. Building an effective disease management strategy requires analysis of a large amount of data, early detection of the disease, assessment of its severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients and reduce the associated medical costs. Toward this direction, machine learning techniques have been employed. The aim of this paper is to present the state of the art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence, estimating the subtype, assessing the severity of heart failure and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality, are presented. To the authors' knowledge, this is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, has been presented. Keywords: Heart failure, Diagnosis, Prediction, Severity estimation, Classification, Data mining

  16. Nonparametric statistical techniques used in dose estimation for beagles exposed to inhaled plutonium nitrate

    International Nuclear Information System (INIS)

    Stevens, D.L.; Dagle, G.E.

    1986-01-01

    Retention and translocation of inhaled radionuclides are often estimated from the sacrifice of multiple animals at different time points. The data for each time point can be averaged and a smooth curve fitted to the mean values, or a smooth curve may be fitted to the entire data set. However, an analysis based on means may not be the most appropriate if there is substantial variation in the initial amount of the radionuclide inhaled or if the data are subject to outliers. A method has been developed that takes account of these problems. The body burden is viewed as a compartmental system, with the compartments identified with body organs. A median polish is applied to the multiple logistic transform of the compartmental fractions (compartment burden/total burden) at each time point. A smooth function is fitted to the results of the median polish. This technique was applied to data from beagles exposed to an aerosol of 239Pu(NO3)4. Models of retention and translocation for lungs, skeleton, liver, kidneys, and tracheobronchial lymph nodes were developed and used to estimate dose. 4 refs., 3 figs., 4 tabs
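    The median polish step named above can be sketched as follows. The compartment fractions are illustrative, not the beagle data, and a per-compartment logit is used as one reading of the "multiple logistic transform":

```python
import numpy as np

def median_polish(x, n_iter=10):
    """Tukey's median polish: x[i, j] ~ overall + row[i] + col[j] + resid[i, j]."""
    resid = np.asarray(x, float).copy()
    overall, row, col = 0.0, np.zeros(resid.shape[0]), np.zeros(resid.shape[1])
    for _ in range(n_iter):
        rmed = np.median(resid, axis=1); row += rmed; resid -= rmed[:, None]
        m = np.median(row); overall += m; row -= m
        cmed = np.median(resid, axis=0); col += cmed; resid -= cmed[None, :]
        m = np.median(col); overall += m; col -= m
    return overall, row, col, resid

# Rows: compartments (e.g. lung, skeleton, liver); columns: sacrifice time points.
frac = np.array([[0.70, 0.55, 0.40],     # illustrative compartment fractions,
                 [0.15, 0.25, 0.35],     # not data from the beagle study
                 [0.05, 0.10, 0.15]])
logit = np.log(frac / (1 - frac))        # logistic transform of the fractions
overall, row, col, resid = median_polish(logit)
print(np.round(resid, 3))                # residuals left after the polish
```

    A smooth function would then be fitted to the polished effects; the median-based sweeps make the decomposition resistant to outliers, which is the motivation given in the abstract.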

  17. The Use of Coupled Code Technique for Best Estimate Safety Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2006-01-01

    Issues connected with the thermal-hydraulics and neutronics of nuclear plants still challenge the design, safety and operation of Light Water nuclear Reactors (LWR). The lack of full understanding of the complex mechanisms governing the interaction between these issues imposed the adoption of conservative safety limits. Those safety margins put restrictions on the optimal exploitation of the plants and consequently reduce their economic profit. In the light of the sustained development in computer technology, code capabilities have been enlarged substantially. Consequently, advanced safety evaluations and design optimizations that were not possible a few years ago can now be performed. In fact, during the last decades Best Estimate (BE) neutronic and thermal-hydraulic calculations were carried out along rather parallel paths with only few interactions between them. Nowadays, it becomes possible to switch to a new generation of computational tools, namely the coupled code technique. The application of such a method is mandatory for the analysis of accident conditions in which there is strong coupling between the core neutronics and the primary circuit thermal-hydraulics, especially when asymmetrical processes take place in the core and lead to local space-dependent power generation. Through the current study, a demonstration of the maturity level achieved in the calculation of 3-D core performance during complex accident scenarios in NPPs is emphasized. Typical applications are outlined and discussed, showing the main features and limitations of this technique. (author)

  18. Estimation of Shie Glacier Surface Movement Using Offset Tracking Technique with Cosmo-Skymed Images

    Science.gov (United States)

    Wang, Q.; Zhou, W.; Fan, J.; Yuan, W.; Li, H.; Sousa, J. J.; Guo, Z.

    2017-09-01

    Movement is one of the most important characteristics of glaciers and can cause serious natural disasters; monitoring these massive ice bodies is therefore a crucial task. Synthetic Aperture Radar (SAR) can operate all day in any weather conditions, and the images acquired by SAR contain intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques based on the information in SAR images, such as DInSAR and offset tracking, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, poses a great potential danger of outburst. Shie glacier is situated upstream of Sangwang lake; hence, it is important to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning August to December 2016 are used with the offset tracking technique to estimate the surface movement of Shie glacier. The maximum velocity of Shie glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement is faster in summer than in winter, and the velocity decreases as the local temperature decreases. Based on these observations, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.
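    The core of the offset tracking technique is patch matching by normalized cross-correlation between two co-registered SAR intensity images; the best-matching shift of each patch, scaled by pixel spacing and time separation, gives the surface velocity. A minimal sketch with synthetic data (window sizes and the scene itself are illustrative assumptions):

```python
import numpy as np

def ncc_offset(ref, search):
    """Integer offset of patch `ref` inside the larger `search` window by
    normalized cross-correlation -- the core of intensity offset tracking."""
    rh, rw = ref.shape
    r0 = (ref - ref.mean()) / ref.std()
    best, best_off = -2.0, (0, 0)
    for dy in range(search.shape[0] - rh + 1):
        for dx in range(search.shape[1] - rw + 1):
            win = search[dy:dy + rh, dx:dx + rw]
            score = (r0 * (win - win.mean()) / win.std()).mean()
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off, best

rng = np.random.default_rng(0)
master = rng.random((40, 40))              # synthetic SAR intensity scene
ref = master[10:20, 10:20]                 # patch in the master image
search = master[5:30, 5:30]                # search window in the slave image
(dy, dx), score = ncc_offset(ref, search)
# velocity (cm/day) would be offset * pixel_spacing_cm / days_between_acquisitions
print(dy, dx, round(score, 3))
```

    Real processors refine the integer peak to sub-pixel precision and reject matches with low correlation, but the matching criterion is the same.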

  19. Estimation the Amount of Oil Palm Trees Production Using Remote Sensing Technique

    Science.gov (United States)

    Fitrianto, A. C.; Tokimatsu, K.; Sufwandika, M.

    2017-12-01

    Currently, fossil fuels are used as the main source of power supply to generate energy, including electricity. Depletion of fossil fuel reserves has been driving up the price of crude petroleum and the demand for alternative energy that is renewable and environment-friendly, such as that derived from vegetable oils like palm oil, rapeseed and soybean. Indonesia is known as a major palm oil producer, and oil palm is its largest agricultural industry, with a total harvested oil palm area estimated to have grown to 8.9 million ha in 2015. On the other hand, the lack of information about the age of oil palm trees, their changes and their spatial distribution is a major problem for energy planning. This research was conducted to estimate the fresh fruit bunch (FFB) production of oil palm and its distribution using remote sensing techniques. The Cimulang oil palm plantation was chosen as the study area. First, the age of the oil palm trees was estimated from their canopy density derived from Landsat 8 OLI analysis and classified into five classes. From this result, we correlated oil palm age with average FFB production per six months and grouped the trees into seed (0-3 years, 0 kg), young (4-8 years, 68.77 kg), teen (9-14 years, 109.08 kg), and mature (14-25 years, 73.91 kg). The satellite image analysis shows that the Cimulang plantation area consists of teen-stage oil palm trees covering around 81.5% of the area, followed by mature oil palm trees with 18.5%, corresponding to 100 hectares, and has a total FFB production every six months of around 7,974,787.24 kg.
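    The final aggregation step reduces to simple arithmetic: per-class area times tree density times per-tree six-month yield. In the sketch below the per-tree yields are the ones quoted in the abstract, while the total estate area (540 ha, implied by 18.5% corresponding to about 100 ha) and the planting density of 136 trees/ha are illustrative assumptions, not figures from the paper:

```python
# FFB production estimate per six months from age-class shares. Per-tree yields
# are from the abstract; total area (540 ha) and planting density (136 trees/ha)
# are assumed for illustration.
yield_kg_per_tree = {"teen": 109.08, "mature": 73.91}
share = {"teen": 0.815, "mature": 0.185}
total_area_ha, trees_per_ha = 540.0, 136
total_kg = sum(share[c] * total_area_ha * trees_per_ha * yield_kg_per_tree[c]
               for c in share)
print(f"estimated FFB per six months: {total_kg:,.0f} kg")
```

    With these assumed inputs the estimate lands in the same order of magnitude as the abstract's 7,974,787 kg figure; the exact value depends on the true density and area.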

  20. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  1. Fundamental length

    International Nuclear Information System (INIS)

    Pradhan, T.

    1975-01-01

    The concept of fundamental length was first put forward by Heisenberg from purely dimensional reasons. From a study of the observed masses of the elementary particles known at that time, it is surmised that this length should be of the order of magnitude l ≈ 10^-13 cm. It was Heisenberg's belief that introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high energy regions of the 'proper fields'. Since the divergence difficulties arise primarily due to an infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of the waves with a frequency exceeding a certain limit, without destroying the relativistic invariance of the theory. The principle can be stated as follows: it is in principle impossible to invent an experiment of any kind that will permit a distinction between the positions of two particles at rest, the distance between which is below a certain limit. A more elegant way of introducing fundamental length into quantum theory is through commutation relations between two position operators. In quantum field theory, such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)

  2. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    Science.gov (United States)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  3. Estimating Effective Dose of Radiation From Pediatric Cardiac CT Angiography Using a 64-MDCT Scanner: New Conversion Factors Relating Dose-Length Product to Effective Dose.

    Science.gov (United States)

    Trattner, Sigal; Chelliah, Anjali; Prinsen, Peter; Ruzal-Shapiro, Carrie B; Xu, Yanping; Jambawalikar, Sachin; Amurao, Maxwell; Einstein, Andrew J

    2017-03-01

    The purpose of this study is to determine conversion factors that enable accurate estimation of the effective dose (ED) for cardiac 64-MDCT angiography performed on children. Anthropomorphic phantoms representative of 1- and 10-year-old children, with 50 metal oxide semiconductor field-effect transistor dosimeters placed in organs, underwent scanning performed using a 64-MDCT scanner with different routine clinical cardiac scan modes and x-ray tube potentials. Organ doses were used to calculate the ED on the basis of weighting factors published in 1991 in International Commission on Radiological Protection (ICRP) publication 60 and in 2007 in ICRP publication 103. The EDs and the scanner-reported dose-length products were used to determine conversion factors for each scan mode. The effect of infant heart rate on the ED and the conversion factors was also assessed. The mean conversion factors calculated using the current definition of ED that appeared in ICRP publication 103 were 0.099 mSv·mGy^-1·cm^-1 for the 1-year-old phantom and 0.049 mSv·mGy^-1·cm^-1 for the 10-year-old phantom. These conversion factors were a mean of 37% higher than the corresponding conversion factors calculated using the older definition of ED that appeared in ICRP publication 60. Varying the heart rate did not influence the ED or the conversion factors. Conversion factors determined using the definition of ED in ICRP publication 103 and cardiac, rather than chest, scan coverage suggest that the radiation doses that children receive from cardiac CT performed using a contemporary 64-MDCT scanner are higher than the radiation doses previously reported when older chest conversion factors were used. Additional up-to-date pediatric cardiac CT conversion factors are required for use with other contemporary CT scanners and patients of different age ranges.
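    In practice the conversion is a single multiplication, ED ≈ k × DLP. A short sketch using the ICRP 103 based factors reported in the abstract (the DLP value itself is an illustrative scanner readout, not from the study):

```python
# Effective dose from scanner-reported dose-length product using the
# age-specific conversion factors quoted in the abstract (ICRP 103 based).
k_factor = {"1yo": 0.099, "10yo": 0.049}   # mSv per (mGy*cm)
dlp = 120.0                                 # mGy*cm, illustrative readout
ed_1yo = k_factor["1yo"] * dlp
ed_10yo = k_factor["10yo"] * dlp
print(f"ED (1 y): {ed_1yo:.2f} mSv, ED (10 y): {ed_10yo:.2f} mSv")
```

    The same DLP yields roughly twice the effective dose for the 1-year-old phantom, which is the abstract's central point about age-appropriate factors.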

  4. ESTIMATION OF SHIE GLACIER SURFACE MOVEMENT USING OFFSET TRACKING TECHNIQUE WITH COSMO-SKYMED IMAGES

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-09-01

    Full Text Available Movement is one of the most important characteristics of glaciers and can cause serious natural disasters; monitoring these massive ice bodies is therefore a crucial task. Synthetic Aperture Radar (SAR) can operate all day in any weather conditions, and the images acquired by SAR contain intensity and phase information, which are irreplaceable advantages in monitoring the surface movement of glaciers. Moreover, a variety of techniques based on the information in SAR images, such as DInSAR and offset tracking, can be applied to measure the movement. Sangwang lake, a glacial lake in the Himalayas, poses a great potential danger of outburst. Shie glacier is situated upstream of Sangwang lake; hence, it is important to monitor Shie glacier surface movement to assess the risk of outburst. In this paper, 6 high resolution COSMO-SkyMed images spanning August to December 2016 are used with the offset tracking technique to estimate the surface movement of Shie glacier. The maximum velocity of Shie glacier surface movement is 51 cm/d, observed at the end of the glacier tongue, and the velocity is correlated with the change of elevation. Moreover, the glacier surface movement is faster in summer than in winter, and the velocity decreases as the local temperature decreases. Based on these observations, the glacier may break off at the end of the tongue in the near future. The movement results extracted in this paper also illustrate the advantages of high resolution SAR images in monitoring the surface movement of small glaciers.

  5. A time series deformation estimation in the NW Himalayas using SBAS InSAR technique

    Science.gov (United States)

    Kumar, V.; Venkataraman, G.

    2012-12-01

    A time series land deformation study of the north western Himalayan region is presented. Synthetic aperture radar (SAR) interferometry (InSAR) is an important tool for measuring land displacement caused by different geological processes [1]. Frequent spatial and temporal decorrelation in the Himalayan region is a strong impediment to precise deformation estimation using the conventional interferometric SAR approach. In such cases, advanced DInSAR approaches such as PSInSAR and Small Baseline Subset (SBAS) can be used to estimate earth surface deformation. The SBAS technique [2] is a DInSAR approach which uses twelve or more repeat SAR acquisitions, combined into properly chosen data subsets, to generate DInSAR interferograms using the two-pass interferometric approach. Finally it leads to the generation of mean deformation velocity maps and displacement time series. Herein, the SBAS algorithm has been used for time series deformation estimation in the NW Himalayan region. ENVISAT ASAR IS2 swath data from 2003 to 2008 have been used for quantifying slow deformation. The Himalayan region is a very active tectonic belt and active orogeny plays a significant role in the land deformation process [3]. Geomorphology in the region is unique and reacts adversely to climate change, bringing landslides and subsidence. Settlements on the hill slopes are prone to landslides, landslips, rockslides and soil creep. These hazardous features have hampered the overall progress of the region as they obstruct roads and the flow of traffic, break communication, block flowing water in streams, create temporary reservoirs, and bring down much of the soil cover, adding enormous silt and gravel to the streams. It has been observed that average deformation varies from -30.0 mm/year to 10 mm/year in the NW Himalayan region. References [1] Massonnet, D., Feigl, K.L., Rossi, M. and Adragna, F. (1994) Radar interferometry mapping of
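    For a single pixel, the SBAS inversion amounts to least squares over the network of small-baseline interferograms: each interferogram observes the displacement difference between its two acquisition dates, and the per-date displacement history is recovered from the over-determined system. A toy sketch (synthetic dates, pair list, and a deformation rate of about -30 mm/yr chosen only for illustration):

```python
import numpy as np

# SBAS idea for one pixel: interferogram k observes disp[j] - disp[i] for the
# date pair (i, j); unknowns are the incremental displacements between
# consecutive dates, solved by least squares (SVD-based in the real algorithm).
dates = np.array([0.0, 35.0, 70.0, 105.0, 140.0])        # days, illustrative
true_v = -0.08                                            # mm/day (~ -30 mm/yr)
disp = true_v * dates                                     # true displacement history
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # small-baseline network
A = np.zeros((len(pairs), len(dates) - 1))
obs = np.zeros(len(pairs))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0                                       # increments i..j-1 sum up
    obs[k] = disp[j] - disp[i]
incr, *_ = np.linalg.lstsq(A, obs, rcond=None)
est = np.concatenate([[0.0], np.cumsum(incr)])
print(np.round(est, 3))                                   # recovered time series
```

    With noisy, wrapped phase the real inversion also needs unwrapping and atmospheric filtering, but the network least-squares core is the same.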

  6. Estimation of Crop Coefficient of Corn (Kccorn) under Climate Change Scenarios Using Data Mining Technique

    Directory of Open Access Journals (Sweden)

    Kampanad Bhaktikul

    2012-01-01

    Full Text Available The main objectives of this study are to determine the crop coefficient of corn (Kccorn) using a data mining technique under climate change scenarios, and to develop guidelines for future water management based on climate change scenarios. Variables including date, maximum temperature, minimum temperature, precipitation, humidity, wind speed, and solar radiation from seven meteorological stations during 1991 to 2000 were used. The Cross-Industry Standard Process for Data Mining (CRISP-DM) was applied for data collection and analyses. The procedure comprises investigation of the input data, model set-up using Artificial Neural Networks (ANNs), model evaluation, and finally estimation of Kccorn. Three climate change scenarios of carbon dioxide (CO2) concentration level were set: 360 ppm, 540 ppm, and 720 ppm. The results indicated that the best number of nodes for the input layer - hidden layer - output layer was 7-13-1. The correlation coefficient of the model was 0.99. The predicted Kccorn revealed that the evapotranspiration (ETcorn) pattern will change significantly with the CO2 concentration level. From the model predictions, ETcorn will decrease by 3.34% when CO2 increases from 360 ppm to 540 ppm. For a doubling of CO2 concentration from 360 ppm to 720 ppm, ETcorn will increase by 16.13%. Guidelines for future water management to cope with climate change are suggested.
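    A 7-13-1 feed-forward network of the kind reported can be sketched as below. The weights are random placeholders (the paper's trained weights are not given), so the output only illustrates the shape and data flow, not a real Kc prediction:

```python
import numpy as np

# 7 inputs (e.g. date, Tmax, Tmin, rain, humidity, wind, solar) -> 13 sigmoid
# hidden units -> 1 linear output (Kc). Weights are untrained placeholders.
rng = np.random.default_rng(42)
W1, b1 = 0.3 * rng.normal(size=(13, 7)), np.zeros(13)
W2, b2 = 0.3 * rng.normal(size=(1, 13)), np.zeros(1)

def predict_kc(x):
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # hidden layer, sigmoid activation
    return float((W2 @ h + b2)[0])              # linear output unit

x = np.array([0.33, 0.83, 0.80, 0.05, 0.75, 0.18, 0.60])  # scaled input features
print(round(predict_kc(x), 3))
```

    Training (e.g. by backpropagation on the 1991-2000 station records) would fit W1, b1, W2, b2 to reproduce the reported correlation of 0.99.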

  7. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis, the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing these. The developed tower vibration model comprises two different parts: a sub-model used below rated wind speed, and another for above rated wind speed. Supervisory control and data acquisition (SCADA) system data from a single wind turbine collected from March to April 2006 is used in the modeling. Model validation has been subsequently undertaken and is presented. This research has demonstrated the effectiveness of the NSET approach to tower vibration, in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry, a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
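    NSET reconstructs each new observation as a weighted combination of stored "normal" operating states in a memory matrix; a large residual between observation and reconstruction flags anomalous behavior. Below is a minimal sketch in which a Gaussian similarity kernel stands in for the nonlinear operator (the choice of operator and all data are assumptions for illustration, not the turbine SCADA model):

```python
import numpy as np

def nset_estimate(D, x, h=0.5):
    """NSET sketch: reconstruct observation x from the stored normal states
    (columns of memory matrix D). A Gaussian similarity kernel plays the role
    of the nonlinear operator here (one common choice, an assumption)."""
    def op(A, B):  # pairwise nonlinear "product" of state vectors
        d = np.linalg.norm(A[:, :, None] - B[:, None, :], axis=0)
        return np.exp(-(d / h) ** 2)
    w = np.linalg.solve(op(D, D) + 1e-9 * np.eye(D.shape[1]), op(D, x[:, None]))
    return (D @ w).ravel()

rng = np.random.default_rng(1)
D = rng.random((3, 20))                    # memory: 3 signals x 20 normal states
x_normal = D[:, 5]                         # operating point seen during training
x_faulty = D[:, 5] + np.array([0.4, 0.0, 0.0])   # one signal has deviated
r_normal = np.linalg.norm(x_normal - nset_estimate(D, x_normal))
r_faulty = np.linalg.norm(x_faulty - nset_estimate(D, x_faulty))
print(round(r_normal, 4), round(r_faulty, 4))
```

    In the tower-vibration application, the residual between measured and NSET-reconstructed vibration is what exposes faults such as blade angle asymmetry.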

  8. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    Science.gov (United States)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  9. Comparison of volatility function technique for risk-neutral densities estimation

    Science.gov (United States)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    Volatility function technique by using interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches namely smoothing spline and fourth order polynomial in extracting the RND. The implied volatility of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of distribution and the price of underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth order polynomial is more appropriate to be used compared to a smoothing spline in which the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
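    The extraction pipeline described above can be sketched end to end: fit a fourth-order polynomial to the implied-volatility smile, price a dense grid of calls from the fitted smile, then apply the Breeden-Litzenberger relation (discounted second strike derivative of the call price) to recover the RND. The smile, rate, and maturity below are illustrative, not the DJIA data:

```python
import numpy as np
from math import log, sqrt, exp, erf

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * Phi(d1) - K * exp(-r * T) * Phi(d1 - sigma * sqrt(T))

S, T, r = 100.0, 1.0 / 12.0, 0.01                  # illustrative market setup
strikes = np.arange(80.0, 121.0, 5.0)
smile = 0.20 + 0.002 * ((strikes - 100.0) / 10.0) ** 2   # synthetic IV smile
coef = np.polyfit(strikes, smile, 4)               # fourth-order polynomial fit
Kg = np.linspace(82.0, 118.0, 361)
calls = np.array([bs_call(S, K, T, r, np.polyval(coef, K)) for K in Kg])
dK = Kg[1] - Kg[0]
rnd = exp(r * T) * np.gradient(np.gradient(calls, dK), dK)   # Breeden-Litzenberger
mass = float(((rnd[:-1] + rnd[1:]) * 0.5 * dK).sum())
print(round(mass, 3))                              # density mass on the grid, ~1
```

    The first moment of the recovered density is what the paper compares against the realized index level to judge forecast accuracy; a smoothing spline would simply replace the `polyfit` step.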

  10. A scintillation camera technique for quantitative estimation of separate kidney function and its use before nephrectomy

    International Nuclear Information System (INIS)

    Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.

    1975-01-01

    A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. Externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured by a lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. Measured postoperative GFR was compared with preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
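    The prediction rule described in the abstract is a one-line computation: the kidney to remain contributes its fraction of total (depth-corrected) Hippuran uptake times the measured preoperative GFR. The uptake and GFR numbers below are illustrative, not patient data:

```python
# Predicted postoperative GFR = (remaining kidney's share of total Hippuran
# uptake) x (measured preoperative GFR). Numbers are illustrative only.
uptake_left, uptake_right = 58.0, 22.0     # depth-corrected external counts
gfr_preop = 95.0                            # ml/min, measured preoperatively
fraction_remaining = uptake_left / (uptake_left + uptake_right)
gfr_predicted = fraction_remaining * gfr_preop   # right kidney to be removed
print(f"predicted postoperative GFR: {gfr_predicted:.1f} ml/min")
```

    As the abstract notes, the measured postoperative GFR typically exceeds this prediction somewhat because the remaining kidney compensates.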

  11. Estimation of coronary wave intensity analysis using noninvasive techniques and its application to exercise physiology.

    Science.gov (United States)

    Broyd, Christopher J; Nijjer, Sukhjinder; Sen, Sayan; Petraco, Ricardo; Jones, Siana; Al-Lamee, Rasha; Foin, Nicolas; Al-Bustami, Mahmud; Sethi, Amarjit; Kaprielian, Raffi; Ramrakha, Punit; Khan, Masood; Malik, Iqbal S; Francis, Darrel P; Parker, Kim; Hughes, Alun D; Mikhail, Ghada W; Mayet, Jamil; Davies, Justin E

    2016-03-01

    Wave intensity analysis (WIA) has found particular applicability in the coronary circulation, where it can quantify traveling waves that accelerate and decelerate blood flow. The most important wave for the regulation of flow is the backward-traveling decompression wave (BDW). Coronary WIA has hitherto always been calculated from invasive measures of pressure and flow. However, recently it has become feasible to obtain estimates of these waveforms noninvasively. In this study we set out to assess the agreement between invasive and noninvasive coronary WIA at rest and to measure the effect of exercise. Twenty-two patients (mean age 60) with unobstructed coronaries underwent invasive WIA in the left anterior descending artery (LAD). Immediately afterwards, noninvasive LAD flow and pressure were recorded and WIA was calculated from pulsed-wave Doppler coronary flow velocity and central blood pressure waveforms measured using a cuff-based technique. Nine of these patients underwent noninvasive coronary WIA assessment during exercise. A pattern of six waves was observed with both modalities. The BDW was similar between invasive and noninvasive measures [peak: -14.9 ± 7.8 vs. -13.8 ± 7.1 × 10^4 W·m^-2·s^-2, concordance correlation coefficient (CCC): 0.73]. Exercise increased the BDW: at maximum exercise, peak BDW was -47.0 ± 29.5 × 10^4 W·m^-2·s^-2.
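    The underlying computation can be sketched with synthetic waveforms: net wave intensity is the product of the pressure and velocity increments per sample, dI = dP·dU, and with an estimate of the wave speed c it separates into forward and backward components (the backward decompression component corresponding to the BDW). The waveforms, density, and wave speed below are assumptions for illustration, not measured data:

```python
import numpy as np

# Wave intensity sketch: dI = dP * dU per sample; water-hammer separation into
# forward/backward components using an assumed wave speed c. Synthetic data.
t = np.linspace(0.0, 1.0, 500)
P = 80 + 40 * np.sin(2 * np.pi * t) ** 2            # synthetic pressure, mmHg
U = 0.30 + 0.25 * np.sin(2 * np.pi * t + 0.3) ** 2  # synthetic velocity, m/s
rho, c = 1050.0, 10.0                               # blood density, wave speed (assumed)
dP, dU = np.diff(P * 133.32), np.diff(U)            # mmHg -> Pa increments
dI = dP * dU                                        # net wave intensity per sample
dI_plus = (dP + rho * c * dU) ** 2 / (4 * rho * c)     # forward component
dI_minus = -(dP - rho * c * dU) ** 2 / (4 * rho * c)   # backward component (BDW-like)
print(round(float(dI_minus.min()), 4))
```

    By construction the two separated components sum back to the net intensity, dI_plus + dI_minus = dP·dU, which is a useful internal check in any WIA implementation.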

  12. Measurement of the length and position of the lower oesophageal sphincter by correlation of external measurements and radiographic estimations in dogs

    International Nuclear Information System (INIS)

    Waterman, A.E.; Hashim, M.A.

    1991-01-01

    Fifty dogs were investigated in order to correlate the length and position of the lower oesophageal sphincter (LOS) with external measurements. Various external measurements were taken while the dogs were anaesthetised and positioned in lateral recumbency. An oesophageal tube was then introduced into the oesophagus and thoracic radiographs were taken. The 'real internal length of the oesophagus' was calculated as the length from the lower jaw incisor tooth to the position of the oesophageal tube at the costal border of the diaphragm. A highly significant linear correlation was found between this internal length and the external length from lower jaw incisor tooth to the anterior border of the head of the 10th rib. Using oesophageal manometry, the length and position of the LOS was also studied in 25 clinically normal bitches. The mean length of the LOS was found to be 4.6 +/- 0.92 cm. The position of the LOS was a mean of 4.4 +/- 1.69 cm cranial to the costal border of the diaphragm. The findings of this study indicate that the external measurements can be used to position catheters for accurate oesophageal manometry in the dog

  13. Comparison of Estimation Techniques for Vibro-Acoustic Transfer Path Analysis

    Directory of Open Access Journals (Sweden)

    Paulo Eduardo França Padilha

    2006-01-01

    Full Text Available Vibro-acoustic Transfer Path Analysis (TPA) is a tool to evaluate the contribution of different energy propagation paths between a source and a receiver, linked to each other by a number of connections. TPA is typically used to quantify and rank the relative importance of these paths in a given frequency band, determining the most significant one for the receiver. Basically, two quantities have to be determined for TPA: the operational forces at each transfer path and the Frequency Response Functions (FRF) of these paths. The FRF are obtained either experimentally or analytically, and the influence of the mechanical impedance of the source may or may not be taken into account. The operational forces can be directly obtained from measurements using force transducers or indirectly estimated from auxiliary response measurements. Two methods to obtain the operational forces indirectly - the Complex Stiffness Method (CSM) and the Matrix Inversion Method (MIM) - associated with two possible configurations to determine the FRF - including and excluding the source impedance - are presented and discussed in this paper. The effect of weak and strong coupling among the paths is also discussed in light of the techniques presented. The main conclusion is that, with the source removed, CSM gives more accurate results. On the other hand, with the source present, MIM is preferable. In the latter case, CSM should be used only if there is a high impedance mismatch between the source and the receiver. Both methods are unaffected by a higher or lower degree of coupling among the transfer paths.
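    In its simplest form, the Matrix Inversion Method recovers the operational forces by pseudo-inverting a measured FRF matrix over an over-determined set of response points, f = H⁺a. A toy sketch with synthetic complex FRFs at a single frequency line (all values illustrative):

```python
import numpy as np

# Matrix Inversion Method sketch at one frequency line: responses a at 6 points
# are related to 3 path forces f by a = H f; f is recovered as f = pinv(H) a.
rng = np.random.default_rng(3)
H = rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))  # synthetic FRF matrix
f_true = np.array([1.0 + 0.5j, -0.3j, 0.8])                 # operational forces
a = H @ f_true + 1e-3 * rng.normal(size=6)                  # measured responses + noise
f_est = np.linalg.pinv(H) @ a                               # over-determined inversion
print(np.round(np.abs(f_est - f_true), 3))
```

    Using more response points than paths makes the inversion over-determined, which tames measurement noise; in practice the conditioning of H per frequency line decides how reliable the recovered forces are.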

  14. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Therefore, monitoring and controlling the quality of air in the urban environment is very important for public health. However, air quality prediction is a highly complex and non-linear process, and usually several attributes have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem in Jeddah city due to ozone levels. The concentration of ozone was considered as the factor for predicting Air Quality (AQ) under the prevailing atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. Hence, an ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed with testing data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed with fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone level and is a reliable approach to produce more genuine outcomes.

  15. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    International Nuclear Information System (INIS)

    Barut, Murat

    2010-01-01

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless, high-efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimates possible with a conventional single-EKF algorithm. To this end, the introduced estimation technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models based on stator-resistance and rotor-resistance estimation. This differs from the approaches in past studies, which require two separate EKF algorithms operating in a switching or braided manner, and thus has an advantage over previous EKF schemes. The proposed EKF based estimation technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances. It is also combined with speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high- and zero-speed ranges, assuming that the measured stator phase currents and voltages are available. Even under those variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.
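The paper's bi input EKF jointly estimates motor states plus two resistances; as a much-reduced illustration of the predict/update cycle it builds on, here is a minimal scalar EKF for a hypothetical quadratic measurement model (all values illustrative; this is not the induction-motor model):

```python
def ekf_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar EKF for the toy model
    x_{k+1} = x_k (random walk) with measurement z_k = x_k**2."""
    # Predict: the parameter is modelled as (nearly) constant.
    x_pred, P_pred = x, P + Q
    # Update: linearize h(x) = x**2 around the prediction.
    H = 2.0 * x_pred                        # Jacobian dh/dx
    S = H * P_pred * H + R                  # innovation covariance
    K = P_pred * H / S                      # Kalman gain
    x_new = x_pred + K * (z - x_pred ** 2)  # state correction
    P_new = (1.0 - K * H) * P_pred          # covariance correction
    return x_new, P_new

# Hypothetical scenario: recover an unknown constant (true value 3.0)
# from quadratic measurements, starting from a poor initial guess.
x_true = 3.0
x_est, P = 1.0, 1.0
for _ in range(30):
    x_est, P = ekf_step(x_est, P, z=x_true ** 2, Q=0.01, R=0.01)
```

With noiseless measurements the estimate converges to the true value within a handful of iterations; in the paper's setting the same cycle runs over a multi-dimensional state with two alternating model inputs.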

  16. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)

    2010-10-15

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless, high-efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimates possible with a conventional single-EKF algorithm. To this end, the introduced estimation technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models based on stator-resistance and rotor-resistance estimation. This differs from the approaches in past studies, which require two separate EKF algorithms operating in a switching or braided manner, and thus has an advantage over previous EKF schemes. The proposed EKF based estimation technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque (including the viscous friction term) together with the rotor and stator resistances. It is also combined with speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high- and zero-speed ranges, assuming that the measured stator phase currents and voltages are available. Even under those variations, the performance of the speed-sensorless direct vector control system built on the novel EKF based estimation technique is observed to be quite good.

  17. Uranium solution mining cost estimating technique: means for rapid comparative analysis of deposits

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    Twelve graphs provide a technique for determining relative cost ranges for uranium solution mining projects. The use of the technique can provide a consistent framework for rapid comparative analysis of various properties of mining situations. The technique is also useful for determining the sensitivity of cost figures to incremental changes in mining factors or deposit characteristics

  18. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    International Nuclear Information System (INIS)

    Sanders, J; Tian, X; Segars, P; Boone, J; Samei, E

    2016-01-01

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, the DLP of the exam, and the knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.
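The estimation step described above (scanned-mass estimate + exam DLP + knowledgebase lookup) can be sketched with a hypothetical knowledgebase; the table values below are illustrative placeholders, not the study's fitted relationships:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs ascending); clamps at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical knowledgebase: imparted energy normalized by DLP
# (mJ per mGy*cm) as a function of scanned tissue mass (kg).
mass_kg = [10.0, 20.0, 30.0]
e_per_dlp = [0.8, 1.0, 1.2]

def imparted_energy(scanned_mass_kg, dlp_mgy_cm):
    """Estimate imparted energy (mJ) = knowledgebase lookup * exam DLP."""
    return interp(scanned_mass_kg, mass_kg, e_per_dlp) * dlp_mgy_cm

energy_mj = imparted_energy(scanned_mass_kg=25.0, dlp_mgy_cm=500.0)
```

A mass of 25 kg interpolates to 1.1 mJ per mGy·cm, so a 500 mGy·cm exam yields an estimate of 550 mJ under these made-up table values.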

  19. AFSC/RACE/GAP/Palsson: Gulf of Alaska and Aleutian Islands Biennial Bottom Trawl Survey estimates of catch per unit effort, biomass, population at length, and associated tables

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOA/AI Bottom Trawl Estimate database contains abundance estimates for the Alaska Biennial Bottom Trawl Surveys conducted in the Gulf of Alaska and the Aleutian...

  20. Techniques for the estimation of global irradiation from sunshine duration and global irradiation estimation for Italian locations

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-04-01

    The Angstrom equation H = H0(a + bS/S0) has been fitted using the least-squares method to the global irradiation and sunshine duration data of 31 Italian locations for the period 1965-1974. Three more linear equations: i) the equation H' = H0(a + bS/S0), obtained by incorporating the effect of the multiple reflections between the earth's surface and the atmosphere, ii) the equation H = H0(a + bS/S'0), obtained by incorporating the effect of the sunshine recorder chart not burning when the elevation of the sun is less than 5 deg., and iii) the equation H' = H0(a + bS/S'0), obtained by incorporating both the above effects simultaneously, have also each been fitted to the same data. Good correlations, with correlation coefficients around 0.9 or more, are obtained for most of the locations with all four equations. Substantial spatial scatter is obtained in the values of the regression parameters. The use of any of the three latter equations does not result in any advantage over that of the simpler Angstrom equation; it neither reduces the spatial scatter in the values of the regression parameters nor yields better correlation. The computed values of the regression parameters in the Angstrom equation yield estimates of the global irradiation that are on the average within ±4% of the measured values for most of the locations. (author)
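Fitting the Angstrom equation is an ordinary least-squares regression of the ratio H/H0 on S/S0. A minimal sketch with synthetic data (the a and b values and the sample points are illustrative, not the Italian station data):

```python
def fit_angstrom(s_ratio, h_ratio):
    """Least-squares fit of H/H0 = a + b*(S/S0); returns (a, b)."""
    n = len(s_ratio)
    mx = sum(s_ratio) / n
    my = sum(h_ratio) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(s_ratio, h_ratio))
    sxx = sum((x - mx) ** 2 for x in s_ratio)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Synthetic monthly data generated exactly from a = 0.25, b = 0.50.
s = [0.2, 0.4, 0.6, 0.8]            # sunshine-duration ratios S/S0
h = [0.25 + 0.5 * x for x in s]     # clearness ratios H/H0
a, b = fit_angstrom(s, h)
```

With real station data the fitted (a, b) pairs then feed the irradiation estimate H = H0(a + bS/S0) at each location.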

  1. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    Science.gov (United States)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method for multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator, and another Bayesian method developed specifically for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Most prominently, the main advantage of the new AF-based method is that it accurately estimates the DOAs from short data snapshots, and even from a single snapshot, outperforming the state-of-the-art techniques by far in both DOA estimation accuracy and computational cost.
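The core idea of the AF technique can be sketched for an idealized case: a noiseless single snapshot from a half-wavelength-spaced ULA with a known number of sources (two, at illustrative angles). The annihilating filter's coefficients null the snapshot sequence, and its roots are the steering phasors that encode the DOAs. This is a simplified sketch, not the paper's full estimator:

```python
import cmath
import math

def af_doa_two_sources(x):
    """Recover two DOAs (degrees) from one noiseless snapshot x of a
    half-wavelength-spaced ULA via the annihilating-filter technique."""
    # The monic annihilating filter z^2 + h1*z + h2 satisfies
    # x[n] + h1*x[n-1] + h2*x[n-2] = 0; solve a 2x2 system (Cramer's rule).
    det = x[1] * x[1] - x[0] * x[2]
    h1 = (x[0] * x[3] - x[1] * x[2]) / det
    h2 = (x[2] * x[2] - x[1] * x[3]) / det
    # Its roots are the steering phasors u_k = exp(j*pi*sin(theta_k)).
    disc = cmath.sqrt(h1 * h1 - 4 * h2)
    roots = [(-h1 + disc) / 2, (-h1 - disc) / 2]
    return sorted(math.degrees(math.asin(cmath.phase(u) / math.pi))
                  for u in roots)

# Hypothetical scene: two sources at +10 and -20 degrees, 4-element ULA.
true_doas = [10.0, -20.0]
amps = [1.0, 0.7]
snapshot = [sum(a * cmath.exp(1j * math.pi * math.sin(math.radians(t)) * n)
                for a, t in zip(amps, true_doas)) for n in range(4)]
doas = af_doa_two_sources(snapshot)
```

In the noisy, multi-source setting of the paper the filter is instead estimated in a least-squares sense from more antenna elements, but the root-to-angle mapping is the same.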

  2. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and new Zealand Intensive Care Adult Patient Data-Base, 2008-2009.

    Science.gov (United States)

    Moran, John L; Solomon, Patricia J

    2012-05-16

    For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of, firstly, log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and, secondly, unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting, with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using the Bayesian information criterion [BIC: lower values preferred] and residual analysis, as well as predictive performance (R2, concordance correlation coefficient [CCC], mean absolute error [MAE]), were established for each estimator. The dataset consisted of 111,663 patients from 131 ICUs; mean (SD) age was 60.6 (18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length of stay was 3.4 (5.1) days (median 1.8, range 0.17-60) and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS, raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17, and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators

  3. Determination of the length and position of the lower oesophageal sphincter (LOS) by correlation of external measurements with combined radiographic and manometric estimations in the cat

    International Nuclear Information System (INIS)

    Hashim, M.A.; Waterman, A.E.

    1992-01-01

    Fifty DSH cats were studied radiographically and a highly significant linear correlation was found between the length of the oesophagus measured to the diaphragmatic line on the radiographs and the externally measured distance from the lower jaw incisor teeth to the anterior border of the head of the 10th rib. A subsequent manometric study utilizing this correlation in 40 cats suggests that the functional lower oesophageal sphincter (LOS) is situated almost at the level of the diaphragm in the cat. Significant differences were found between the lengths of the LOS in cats anaesthetized with ketamine compared to alphaxalone-alphadolone or xylazine-ketamine-atropine. The mean length of the LOS was 1.42 ± 0.3 cm. The findings of this study indicate that external measurements can be used to position catheters for accurate oesophageal manometry in the cat

  4. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, M.; Manos, G. C.

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to ...... of freedom system loaded by white noise, estimating the coefficient of restitution as explained, and comparing the estimates with the value used in the simulations. Several estimates for the coefficient of restitution are considered, and reasonable results are achieved....

  5. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  6. Estimation of the impact of manufacturing tolerances on burn-up calculations using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bock, M.; Wagner, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Garching (Germany). Forschungszentrum

    2012-11-01

    In recent years, the availability of computing resources has increased enormously. There are two ways to take advantage of this increase in analyses in the field of the nuclear fuel cycle, such as burn-up calculations or criticality safety calculations. The first possible way is to improve the accuracy of the models that are analyzed. For burn-up calculations this means, that the goal to model and to calculate the burn-up of a full reactor core is getting more and more into reach. The second way to utilize the resources is to run state-of-the-art programs with simplified models several times, but with varied input parameters. This second way opens the applicability of the assessment of uncertainties and sensitivities based on the Monte Carlo method for fields of research that rely heavily on either high CPU usage or high memory consumption. In the context of the nuclear fuel cycle, applications that belong to these types of demanding analyses are again burn-up and criticality safety calculations. The assessment of uncertainties in burn-up analyses can complement traditional analysis techniques such as best estimate or bounding case analyses and can support the safety analysis in future design decisions, e.g. by analyzing the uncertainty of the decay heat power of the nuclear inventory stored in the spent fuel pool of a nuclear power plant. This contribution concentrates on the uncertainty analysis in burn-up calculations of PWR fuel assemblies. The uncertainties in the results arise from the variation of the input parameters. In this case, the focus is on the one hand on the variation of manufacturing tolerances that are present in the different production stages of the fuel assemblies. On the other hand, uncertainties that describe the conditions during the reactor operation are taken into account. They also affect the results of burn-up calculations. 
In order to perform uncertainty analyses in burn-up calculations, GRS has improved the capabilities of its general

  7. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Full Text Available Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., Normalized Difference Vegetation Index, Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using a leave-one-out cross validation with these powerful techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high
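The leave-one-out cross validation used for the final comparison can be sketched with a toy linear model; the VI-biomass numbers below are hypothetical and lie exactly on a line, so the LOOCV error is essentially zero:

```python
def ols_fit(xs, ys):
    """Simple linear regression y = a + b*x by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Leave-one-out cross validation: refit without each sample in turn,
    predict the held-out point, and return the RMSE of those predictions."""
    sq_errs = []
    for i in range(len(xs)):
        xtr, ytr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = ols_fit(xtr, ytr)
        sq_errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return (sum(sq_errs) / len(sq_errs)) ** 0.5

# Hypothetical vegetation-index vs biomass (t/ha) pairs on an exact line.
vi = [0.1, 0.2, 0.3, 0.4, 0.5]
agb = [1.2 + 8.0 * x for x in vi]
rmse = loocv_rmse(vi, agb)
```

The same held-out loop applies unchanged when the regressor is PLSR or any of the other seven techniques; only `ols_fit` is swapped out.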

  8. Winter ecology of the Porcupine caribou herd, Yukon: Part III, Role of day length in determining activity pattern and estimating percent lying

    Directory of Open Access Journals (Sweden)

    D. E. Russell

    1986-06-01

    Full Text Available Data on the activity pattern, proportion of time spent lying, and the length of active and lying periods in winter are presented from a 3-year study on the Porcupine caribou herd. Animals were most active at sunrise and sunset, resulting in from one (late fall, early and mid winter) to two (early fall and late winter) to three (spring) intervening lying periods. Mean active/lying cycle length decreased from late fall (298 min) to early winter (238 min), increased to a peak in mid winter (340 min), then declined in late winter (305 min) and again in spring (240 min). Mean length of the lying period increased throughout the 3 winter months, from 56 min in early winter to 114 min in mid winter and 153 min in late winter. The percent of the day animals spent lying decreased from fall to early winter, increased throughout the winter, and declined in spring. This pattern was related, in part, to day length and was used to compare percent lying among herds. The relationship is suggested as a means of comparing quality of winter ranges.

  9. Porosity and hydraulic conductivity estimation of the basaltic aquifer in Southern Syria by using nuclear and electrical well logging techniques

    Science.gov (United States)

    Asfahani, Jamal

    2017-08-01

    An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by applying this technique appears reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed alternative well logging methodology seems promising and could be practiced in basaltic environments for the estimation of the hydraulic conductivity parameter. However, more detailed research is still required to establish the performance of this proposed technique in basaltic environments.

  10. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    Science.gov (United States)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
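A k-nearest-neighbors regression with the "inexperience" metric described above (distance in predictor space to the single nearest observation) can be sketched as follows; the predictor coordinates and TOC values are hypothetical, and real applications would use many more predictors and samples:

```python
import math

def knn_predict(query, samples, k):
    """KNN regression: predict the mean of the k nearest observed values.
    Also returns 'inexperience' = distance to the single nearest neighbor,
    flagging regions where new data collection would help most."""
    dists = sorted((math.dist(query, pos), val) for pos, val in samples)
    prediction = sum(val for _, val in dists[:k]) / k
    inexperience = dists[0][0]
    return prediction, inexperience

# Hypothetical observations: ((predictor1, predictor2), seafloor TOC wt%).
samples = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0),
           ((0.0, 1.0), 3.0), ((5.0, 5.0), 10.0)]
toc, inexp = knn_predict((0.1, 0.1), samples, k=3)
```

Here the query sits near the first three samples, so the k=3 prediction is their mean (2.0) and the inexperience is small; a query far from all samples would return a large inexperience, marking a prediction to treat with caution.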

  11. Sensitivity analysis of a pulse nutrient addition technique for estimating nutrient uptake in large streams

    Science.gov (United States)

    Laurence Lin; J.R. Webster

    2012-01-01

    The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...

  12. Estimating forest attribute parameters for small areas using nearest neighbors techniques

    Science.gov (United States)

    Ronald E. McRoberts

    2012-01-01

    Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...

  13. Application of optimal estimation techniques to FFTF decay heat removal analysis

    International Nuclear Information System (INIS)

    Nutt, W.T.; Additon, S.L.; Parziale, E.A.

    1979-01-01

    The verification and adjustment of plant models for decay heat removal analysis using a mix of engineering judgment and formal techniques from control theory are discussed. The formal techniques facilitate dealing with typical test data which are noisy, redundant and do not measure all of the plant model state variables directly. Two pretest examples are presented. 5 refs

  14. Comparison of three techniques for estimating the forage intake of lactating dairy cows on pasture.

    Science.gov (United States)

    Macoon, B; Sollenberger, L E; Moore, J E; Staples, C R; Fike, J H; Portier, K M

    2003-09-01

    Quantifying DMI is necessary for estimation of nutrient consumption by ruminants, but it is inherently difficult on grazed pastures and even more so when supplements are fed. Our objectives were to compare three methods of estimating forage DMI (inference from animal performance, evaluation from fecal output using a pulse-dose marker, and estimation from herbage disappearance methods) and to identify the most useful approach or combination of approaches for estimating pasture intake by lactating dairy cows. During three continuous 28-d periods in the winter season, Holstein cows (Bos taurus; n = 32) grazed a cool-season grass or a cool-season grass-clover mixture at two stocking rates (SR; 5 vs. 2.5 cows/ha) and were fed two rates of concentrate supplementation (CS; 1 kg of concentrate [as-fed] per 2.5 or 3.5 kg of milk produced). Animal response data used in computations for the animal performance method were obtained from the latter 14 d of each period. For the pulse-dose marker method, chromium-mordanted fiber was used. Pasture sampling to determine herbage disappearance was done weekly throughout the study. Forage DMI estimated by the animal performance method was different among periods (P forage mass. The pulse-dose marker method generally provided greater estimates of forage DMI (as much as 11.0 kg/d more than the animal performance method) and was not correlated with the other methods. Estimates of forage DMI by the herbage disappearance method were correlated with the animal performance method. The difference between estimates from these two methods, ranging from -4.7 to 5.4 kg/d, were much lower than their difference from pulse-dose marker estimates. The results of this study suggest that, when appropriate for the research objectives, the animal performance or herbage disappearance methods may be useful and less costly alternatives to using the pulse-dose method.

  15. Estimation of low level gross alpha activities in the radioactive effluent using liquid scintillation counting technique

    International Nuclear Information System (INIS)

    Bhade, Sonali P.D.; Johnson, Bella E.; Singh, Sanjay; Babu, D.A.R.

    2012-01-01

    A technique has been developed for simultaneous measurement of gross alpha and gross beta activity concentrations in low level liquid effluent samples in the presence of higher activity concentrations of tritium. For this purpose, an alpha/beta discriminating Pulse Shape Analysis (PSA) Liquid Scintillation Counting (LSC) technique was used. The main advantages of this technique are easy sample preparation, rapid measurement and higher sensitivity. The calibration methodology for the Quantulus 1220 LSC based on the PSA technique, using 241Am and 90Sr/90Y as alpha and beta standards respectively, is described in detail. The LSC technique was validated by measuring alpha and beta activity concentrations in test samples with known amounts of 241Am and 90Sr/90Y activities spiked in distilled water. The results obtained by the LSC technique were compared with conventional planchet counting methods such as ZnS(Ag) and end-window GM detectors. The gross alpha and gross beta activity concentrations in spiked samples, obtained by the LSC technique, were found to be within ±5% of the reference values. (author)

  16. Using fuzzy logic to improve the project time and cost estimation based on Project Evaluation and Review Technique (PERT

    Directory of Open Access Journals (Sweden)

    Farhad Habibi

    2018-09-01

    Full Text Available Among different factors, correct scheduling is one of the vital elements for project management success. There are several ways to schedule projects, including the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT). Due to problems in estimating the durations of activities, these methods cannot accurately and completely model actual projects. The use of fuzzy theory is a basic way to improve scheduling and deal with such problems. Fuzzy theory approximates project scheduling models to reality by taking into account uncertainties in decision parameters as well as expert experience and mental models. This paper provides a step-by-step approach for accurate estimation of the time and cost of projects using the Project Evaluation and Review Technique (PERT) and expert views as fuzzy numbers. The proposed method includes several steps. In the first step, the necessary information for project time and cost is estimated using the Critical Path Method (CPM) and the Project Evaluation and Review Technique (PERT). The second step considers the durations and costs of the project activities as trapezoidal fuzzy numbers, and then the time and cost of the project are recalculated. The durations and costs of activities are estimated using questionnaires as well as weighting of expert opinions, averaging, and defuzzification, based on a step-by-step algorithm. The calculation procedures for evaluating these methods are applied in a real project, and the obtained results are explained.
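The trapezoidal-fuzzy arithmetic described above (fuzzy durations, summation along a path, defuzzification to a crisp schedule figure) can be sketched as follows; the centroid defuzzifier and the example durations are illustrative choices, not necessarily the paper's exact algorithm:

```python
def fuzzy_add(t1, t2):
    """Sum of two trapezoidal fuzzy numbers (a, b, c, d): component-wise."""
    return tuple(p + q for p, q in zip(t1, t2))

def defuzzify(t):
    """Centroid of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = t
    den = (c + d) - (a + b)
    if den == 0:                      # degenerate (crisp) case
        return (a + d) / 2.0
    return ((c * c + d * d + c * d) - (a * a + b * b + a * b)) / (3.0 * den)

# Two sequential activities with expert-elicited trapezoidal durations
# in days (hypothetical values).
act_a = (1.0, 2.0, 3.0, 4.0)
act_b = (2.0, 3.0, 4.0, 5.0)
path = fuzzy_add(act_a, act_b)   # fuzzy duration of the path A -> B
crisp = defuzzify(path)          # single figure for schedule reporting
```

Summing along the critical path and defuzzifying at the end preserves the experts' uncertainty until the final reporting step, instead of collapsing each activity to a point estimate up front.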

  17. New horizontal global solar radiation estimation models for Turkey based on robust coplot supported genetic programming technique

    International Nuclear Information System (INIS)

    Demirhan, Haydar; Kayhan Atilgan, Yasemin

    2015-01-01

    Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by robust coplot technique and

  18. Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania.

    Science.gov (United States)

    Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James

    2009-09-01

    Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
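As a rough illustration of the regression approach (one of the three compared), the sketch below trains a multiple linear regression on a synthetic, complete reference sample and predicts one missing coordinate of a "damaged" specimen. All data, dimensions, and coefficients are invented for the example; real use would regress on Procrustes-aligned landmark configurations.

```python
import numpy as np

# Hypothetical sketch: estimate a missing landmark coordinate by multiple
# linear regression on the observed landmarks, trained on a complete
# reference sample of individuals.
rng = np.random.default_rng(0)

n_ref = 50                                     # reference individuals
X_ref = rng.normal(size=(n_ref, 4))            # 4 observed landmark coords
beta_true = np.array([0.5, -1.0, 0.25, 2.0])   # synthetic covariation pattern
y_ref = X_ref @ beta_true + rng.normal(scale=0.05, size=n_ref)  # missing coord

# Fit by least squares, with an intercept column
A = np.column_stack([np.ones(n_ref), X_ref])
coef, *_ = np.linalg.lstsq(A, y_ref, rcond=None)

# "Damaged" specimen: observed coordinates only; predict the missing one
x_damaged = np.array([0.2, -0.3, 1.0, 0.5])
estimate = coef[0] + x_damaged @ coef[1:]
```

The point of the abstract's comparison is that this kind of estimator can leverage variation/covariation patterning from a large reference sample, which a single close relative cannot provide.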

  19. Tissue printing technique in nitrocellulose membranes: a rapid detection technique for estimating incidence of PVX, PVY, PVS and PLRV viruses infecting potato (Solanum spp.)

    Directory of Open Access Journals (Sweden)

    Mónica Guzmán

    2002-07-01

    Full Text Available The ELISA serological technique has been used since the 1970s as a quantitative technique for the detection of many groups of viruses which infect plants. The immuno-impression (IMI) on nitrocellulose membranes, a qualitative technique, has been implemented more recently for the detection of different viral groups. In this work, the IMI technique was adapted for the detection of the PVX, PVY, PVS and PLRV viruses which attack different species and varieties of the potato crop (Solanum spp.), such as Egg yolk, Capiro, Morita, Pastusa, Monserrate, Tuquerreña, ICA Puracé and ICA Nariño, all from the Nariño department. The four viruses mentioned above can cause 30% to 60% losses in production, whether acting alone or synergistically. This means that they can easily reduce the economic benefits of a country like Colombia, characterised as being a great potato producer (i.e. more than 2.8 million tons per year). The IMI technique was compared with the ELISA technique (Enzyme-Linked Immunosorbent Assay) using the same samples, leading to confirmation of the test's sensitivity for detecting the viruses. From a total of 800 samples analyzed by IMI from different areas in the Nariño department, incidences of 72% for PVY, 38.7% for PVX, 85.6% for PVS and 91.1% for PLRV were found; these estimates were similar to or greater than those obtained using ELISA. These results are new for Colombia in terms of implementing the easy and sensitive IMI technique for detecting these four viral groups infecting the potato, as well as estimating their incidence in Nariño, one of Colombia's most important potato-producing departments. The opportune and flexible detection of viruses leads to an effective response in eradicating contaminated material, both in the field and from in vitro culture. The results suggest that implementing IMI could bring wide benefits for potato seed certification programmes, as they maintain sensitivity and specificity, they

  20. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    Science.gov (United States)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
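The dynamic filtering framework described above can be illustrated with a minimal bootstrap particle filter. The growth-rate dynamics, noise levels, and synthetic "SAR observations" below are invented for the sketch and stand in for the paper's calibrated growth model and multi-sensor inputs.

```python
import numpy as np

# Minimal bootstrap particle filter for a continuous growth-stage
# indicator: predict with a growth model, weight particles by a Gaussian
# observation likelihood, then resample to avoid weight degeneracy.
rng = np.random.default_rng(1)

n_particles = 2000
growth_rate, proc_sd, obs_sd = 0.1, 0.02, 0.15   # illustrative values

particles = rng.normal(0.0, 0.05, n_particles)   # initial stage near 0
true_stage = 0.0

for t in range(30):
    true_stage += growth_rate
    obs = true_stage + rng.normal(0, obs_sd)     # synthetic SAR-derived obs

    # predict: propagate each particle through the growth model
    particles = particles + growth_rate + rng.normal(0, proc_sd, n_particles)

    # update: weight by Gaussian observation likelihood
    w = np.exp(-0.5 * ((obs - particles) / obs_sd) ** 2)
    w /= w.sum()

    # resample (multinomial)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

estimate = particles.mean()   # filtered growth-stage estimate
```

In the paper's setting the observation step would map the state through the modeled relationship between growth stage and SAR polarimetric parameters rather than observing the stage directly.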

  1. A virtually blind spectrum efficient channel estimation technique for MIMO-OFDM system

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output antennas in conjunction with Orthogonal Frequency-Division Multiplexing is a dominant air interface for 4G and 5G cellular communication systems. Additionally, MIMO- OFDM based air interface is the foundation for latest wireless Local Area Networks, wireless Personal Area Networks, and digital multimedia broadcasting. Whether it is a single antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of channel bandwidth. This paper describes a virtually blind spectrum efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)

  2. River suspended sediment estimation by climatic variables implication: Comparative study among soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal

    2012-06-01

    Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers using hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA, are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to perform best.

  3. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral...... resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can...... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60deg. All 280 spectrograms were then randomised and presented to a radiologist blinded for method and OW for visual evaluation: useful or not useful. WMbw...
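Welch's method (WM) referenced above can be sketched directly: split the slow-time observation window into overlapping segments, window each segment, and average the periodograms. The synthetic tone, sampling rate, segment length and overlap below are illustrative, not the RASMUS scanner's actual Doppler parameters.

```python
import numpy as np

# Welch's method: averaged modified periodograms over an observation
# window. A 100 Hz tone in noise stands in for a Doppler shift; constant
# scaling factors are simplified since only the peak location matters here.
rng = np.random.default_rng(2)

fs = 1000.0                    # sampling (pulse-repetition) frequency, Hz
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100.0 * t) + 0.5 * rng.normal(size=n)

seg_len, overlap = 128, 64     # shorter segments: coarser resolution, lower variance
window = np.hanning(seg_len)
scale = fs * (window ** 2).sum()

psds = []
for start in range(0, n - seg_len + 1, seg_len - overlap):
    seg = x[start:start + seg_len] * window
    psds.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
psd = np.mean(psds, axis=0)

freqs = np.fft.rfftfreq(seg_len, 1 / fs)
peak_freq = freqs[np.argmax(psd)]   # lands near the 100 Hz tone
```

The dependence on a long observation window is visible here: with fewer segments to average, the spectral variance (and hence the contrast of the spectrogram) degrades, which is the problem the adaptive BPC/BAPES methods target.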

  4. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  5. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    International Nuclear Information System (INIS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk absorption, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  6. Estimating primary productivity of tropical oil palm in Malaysia using remote sensing technique and ancillary data

    Science.gov (United States)

    Kanniah, K. D.; Tan, K. P.; Cracknell, A. P.

    2014-10-01

    The amount of carbon sequestered by vegetation can be estimated from vegetation productivity. At present, there is a knowledge gap in oil palm net primary productivity (NPP) at the regional scale. Therefore, in this study the NPP of oil palm trees in Peninsular Malaysia was estimated using a remote sensing based light use efficiency (LUE) model with inputs from local meteorological data, upscaled leaf area index/fractional photosynthetically active radiation (LAI/fPAR) derived using UK-DMC 2 satellite data, and a constant maximum LUE value from the literature. NPP values estimated from the model were then compared and validated with NPP estimated using allometric equations developed by Corley and Tinker (2003), Henson (2003) and Syahrinudin (2005), with the diameter at breast height, age and height of the oil palm trees collected from three estates in Peninsular Malaysia. Results of this study show that oil palm NPP derived using the light use efficiency model increases with the age of the oil palm trees and stabilises after ten years. The mean value of oil palm NPP at 118 plots as derived using the LUE model is 968.72 g C m-2 year-1, which is 188% to 273% higher than the NPP derived from the allometric equations. The estimated NPP of young oil palm trees is lower than that of mature oil palm trees (young trees have lower LAI and therefore lower fPAR, which is an important variable in the LUE model). In contrast, oil palm NPP is estimated to decrease with the age of the oil palm trees when the allometric equations are used. It was found in this study that LUE models could not capture the NPP variation of oil palm trees if LAI/fPAR is used. On the other hand, tree height and DBH are found to be important variables that can capture changes in oil palm NPP as a function of age.
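The LUE-model calculation itself is a simple product of a maximum LUE, environmental down-regulation scalars, fPAR and incident PAR. The sketch below shows that arithmetic with invented placeholder values; the study's actual inputs come from local meteorology and the UK-DMC 2 derived LAI/fPAR, and its scalar formulation may differ.

```python
# Back-of-envelope sketch of a light-use-efficiency (LUE) NPP estimate.
# All numbers below are illustrative placeholders, not the study's inputs.
eps_max = 1.5          # g C per MJ APAR, assumed maximum LUE from literature
t_scalar = 0.9         # temperature down-regulation scalar (0..1)
w_scalar = 0.85        # water-stress scalar (0..1)
fpar = 0.8             # fraction of PAR absorbed, from the LAI/fPAR product
par = 2800.0           # incident PAR, MJ m-2 yr-1, from local meteorology
cue = 0.5              # assumed carbon-use efficiency converting GPP to NPP

apar = fpar * par                           # absorbed PAR
gpp = eps_max * t_scalar * w_scalar * apar  # gross primary productivity
npp = cue * gpp                             # g C m-2 yr-1
```

The sensitivity the abstract describes is visible here: for young stands a low LAI drives fpar down, and NPP scales linearly with it, regardless of tree height or DBH.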

  7. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which were set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
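For the simplest nonlinear statistic mentioned above, a ratio R = Y/X, Taylor linearization reduces variance estimation to the variance of a linearized variable. The sketch below illustrates that textbook case under simple random sampling with synthetic data (not EU-SILC data, and without the design weights a real survey would carry).

```python
import numpy as np

# Taylor linearization for the variance of a ratio R = Y/X under simple
# random sampling: linearize R around the population means, giving
# z_i = (y_i - R * x_i) / mean(x), then take the variance of mean(z).
rng = np.random.default_rng(3)

n = 400
x = rng.uniform(1, 5, n)                 # e.g. household size
y = 2.0 * x + rng.normal(0, 0.5, n)      # e.g. an income-type variable

R = y.sum() / x.sum()                    # estimated ratio

z = (y - R * x) / x.mean()               # linearized values

var_R = z.var(ddof=1) / n                # SRS variance of the mean of z
se_R = var_R ** 0.5                      # standard error of R
```

The "Laeken" indicators need the generalized (influence-function) version precisely because they cannot be written as a smooth function of totals the way this ratio can.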

  8. A MATLAB program for estimation of unsaturated hydraulic soil parameters using an infiltrometer technique

    DEFF Research Database (Denmark)

    Mollerup, Mikkel; Hansen, Søren; Petersen, Carsten

    2008-01-01

    We combined an inverse routine for assessing the hydraulic soil parameters of the Campbell/Mualem model with the power series solution developed by Philip for describing one-dimensional vertical infiltration into a homogenous soil. We based the estimation routine on a proposed measurement procedure....... An independent measurement of the soil water content at saturation may reduce the uncertainty of estimated parameters. Response surfaces of the objective function were analysed. Scenarios for various soils and conditions, using numerically generated synthetic cumulative infiltration data with normally...
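Philip's solution is often quoted in its two-term truncation I(t) = S·sqrt(t) + A·t. The sketch below uses that truncation with illustrative sorptivity and gravity-term values; the routine described here works with the fuller power series and the Campbell/Mualem parameterization, so this is only the shape of the forward model being inverted.

```python
# Philip's two-term infiltration approximation (illustrative parameters):
#   cumulative:  I(t) = S*sqrt(t) + A*t
#   rate:        i(t) = S/(2*sqrt(t)) + A   (tends toward A at long times)
# S: sorptivity (cm h^-0.5), A: gravity term (cm h^-1); t in hours.

def cumulative_infiltration(t, S=2.0, A=0.5):
    """Cumulative infiltration depth (cm) after time t (hours)."""
    return S * t ** 0.5 + A * t

def infiltration_rate(t, S=2.0, A=0.5):
    """Instantaneous infiltration rate (cm/h) at time t > 0."""
    return S / (2.0 * t ** 0.5) + A

I1 = cumulative_infiltration(1.0)   # 2.5 cm after one hour
```

An inverse routine like the one described would fit (S, A)-type parameters by minimizing the misfit between such a model and measured cumulative infiltration data.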

  9. THE IMPROVEMENT OF ESTIMATION TECHNIQUE FOR EFFECTIVENESS OF INVESTMENT PROJECTS ON WASTE UTILIZATION

    Directory of Open Access Journals (Sweden)

    V.V. Krivorotov

    2008-06-01

    Full Text Available The main tendencies of waste product formation and recycling in the Russian Federation and in the Sverdlovsk region have been analyzed, and the principal factors restraining the inclusion of anthropogenic formations in the economic circulation have been revealed. A technical approach to the estimation of the integrated ecological and economic efficiency of recycling projects is proposed which, in the authors' opinion, secures higher objectivity of this estimation as well as the validity of the decisions made on their realization.

  10. Estimating the burden of pneumococcal pneumonia among adults: a systematic review and meta-analysis of diagnostic techniques.

    Directory of Open Access Journals (Sweden)

    Maria A Said

    Full Text Available Pneumococcal pneumonia causes significant morbidity and mortality among adults. Given the limitations of diagnostic tests for non-bacteremic pneumococcal pneumonia, most studies report the incidence of bacteremic or invasive pneumococcal disease (IPD) and thus grossly underestimate the pneumococcal pneumonia burden. We aimed to develop a conceptual and quantitative strategy to estimate the non-bacteremic disease burden among adults with community-acquired pneumonia (CAP) using systematic study methods and the availability of a urine antigen assay. We performed a systematic literature review of studies providing information on the relative yield of various diagnostic assays (the BinaxNOW® S. pneumoniae urine antigen test (UAT), blood culture and/or sputum culture) in diagnosing pneumococcal pneumonia. We estimated the proportion of pneumococcal pneumonia that is bacteremic, the proportion of CAP attributable to pneumococcus, and the additional contribution of the Binax UAT beyond conventional diagnostic techniques, using random-effects meta-analytic methods and bootstrapping. We included 35 studies in the analysis, predominantly from developed countries. The estimated proportion of pneumococcal pneumonia that is bacteremic was 24.8% (95% CI: 21.3%, 28.9%). The estimated proportion of CAP attributable to pneumococcus was 27.3% (95% CI: 23.9%, 31.1%). The Binax UAT diagnosed an additional 11.4% (95% CI: 9.6%, 13.6%) of CAP beyond conventional techniques. We were limited by the fact that not all patients underwent all diagnostic tests and by the sensitivity and specificity of the diagnostic tests themselves. We address these resulting biases and provide a range of plausible values in order to estimate the burden of pneumococcal pneumonia among adults. Estimating the adult burden of pneumococcal disease from bacteremic pneumococcal pneumonia data alone significantly underestimates the true burden of disease in adults.
For every case of bacteremic pneumococcal pneumonia

  11. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    Science.gov (United States)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested on a dataset of 9 kernels extracted from real photographs by Adobe, for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
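The core of ART is a Kaczmarz-style sweep that projects the current estimate onto each projection equation in turn. The sketch below runs the unregularized update on a small random system standing in for the real projection geometry; the paper's version adds regularization and solves with the conjugate gradient method.

```python
import numpy as np

# Minimal ART (Kaczmarz) sketch for A x = b: for each row a_i, project the
# current estimate onto the hyperplane a_i . x = b_i. The random system
# below is illustrative; in PSF tomography, rows encode ray projections
# and x is the flattened blur kernel.
rng = np.random.default_rng(4)

A = rng.normal(size=(40, 10))          # projection geometry (rows = rays)
x_true = rng.normal(size=10)           # unknown kernel, flattened
b = A @ x_true                         # noiseless projection data

x = np.zeros(10)
relax = 1.0                            # relaxation parameter in (0, 2)
for sweep in range(50):
    for i in range(A.shape[0]):
        a = A[i]
        x += relax * (b[i] - a @ x) / (a @ a) * a

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With noisy data the iterates eventually start fitting the noise, which is exactly why a regularized variant is needed in practice.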

  12. Estimation of fracture parameters in foam core materials using thermal techniques

    DEFF Research Database (Denmark)

    Dulieu-Barton, J. M.; Berggreen, Christian; Boyenval Langlois, C.

    2010-01-01

    The paper presents some initial work on establishing the stress state at a crack tip in PVC foam material using a non-contact infra-red technique known as thermoelastic stress analysis (TSA). A parametric study of the factors that may affect the thermoelastic response of the foam material is described. A mode I simulated crack in the form of a machined notch is used to establish the feasibility of the TSA approach to derive stress intensity factors for the foam material. The overall goal is to demonstrate that thermal techniques have the ability to provide deeper insight into the behaviour...

  13. Using Quantitative Data Analysis Techniques for Bankruptcy Risk Estimation for Corporations

    Directory of Open Access Journals (Sweden)

    Ştefan Daniel ARMEANU

    2012-01-01

    Full Text Available Diversification of methods and techniques for the quantification and management of risk has led to the development of many mathematical models, a large part of which focus on measuring bankruptcy risk for businesses. In financial analysis there are many indicators which can be used to assess the risk of bankruptcy of enterprises, but to make an assessment the number of indicators must first be reduced, which can be achieved through principal component, cluster and discriminant analysis techniques. In this context, the article aims to build a scoring function used to identify bankrupt companies, using a sample of companies listed on the Bucharest Stock Exchange.
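As a sketch of the discriminant-analysis step, the following builds a two-group Fisher scoring function on synthetic financial ratios. The ratio definitions, group means, and cut-off rule are illustrative, not the indicators or sample used in the article.

```python
import numpy as np

# Two-group linear discriminant (Fisher) scoring function on synthetic
# financial ratios: w = Sw^{-1} (mu_healthy - mu_bankrupt), with a midpoint
# cut-off between the two group mean scores.
rng = np.random.default_rng(5)

# Synthetic ratios (e.g. liquidity, leverage) for healthy vs bankrupt firms
healthy = rng.normal([1.8, 0.3], 0.3, size=(100, 2))
bankrupt = rng.normal([0.9, 0.9], 0.3, size=(100, 2))

mu_h, mu_b = healthy.mean(0), bankrupt.mean(0)
Sw = np.cov(healthy.T) + np.cov(bankrupt.T)     # pooled within-group scatter
w = np.linalg.solve(Sw, mu_h - mu_b)            # discriminant direction
cut = 0.5 * (healthy @ w).mean() + 0.5 * (bankrupt @ w).mean()

def z_score(firm):
    """Scoring function: positive classifies as healthy."""
    return firm @ w - cut

accuracy = np.concatenate([healthy @ w > cut, bankrupt @ w < cut]).mean()
```

In the article's pipeline, principal component or cluster analysis would first compress a larger set of indicators before a discriminant function like this is fitted.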

  14. Comparison of different techniques for CPR in microgravity: a simple mathematical estimation of cardiopulmonary resuscitation quality for the space environment.

    Science.gov (United States)

    Braunecker, S; Douglas, B; Hinkelbein, J

    2015-07-01

    Since astronauts are selected carefully, are usually young, and are intensively observed before and during training, relevant medical problems are rare. Nevertheless, there is a certain risk of cardiac arrest in space requiring cardiopulmonary resuscitation (CPR). To date, there are 5 known techniques to perform CPR in microgravity. The aim of the present study was to analyze the different techniques for CPR in microgravity with respect to CPR quality. To identify relevant publications on CPR quality in microgravity, a systematic analysis with defined search criteria was performed in the PubMed database (http://www.pubmed.com). For the analysis, the keywords ("reanimation" or "CPR" or "resuscitation") and ("space" or "microgravity" or "weightlessness") and the specific names of the techniques ("Standard-technique" or "Straddling-manoeuvre" or "Reverse-bear-hug-technique" or "Evetts-Russomano-technique" or "Hand-stand-technique") were used. To compare the quality and effectiveness of the different techniques, we used the compression product (CP), a mathematical estimation of cardiac output. Using the predefined keywords for the literature search, 4 different publications were identified (parabolic flight or under simulated conditions on earth) dealing with CPR efforts in microgravity and giving specific numbers. No study was performed under real space conditions. Regarding compression depth, the handstand (HS) technique as well as the reverse bear hug (RBH) technique met the parameters of the guidelines for CPR in 1G environments best (HS ratio, 0.91 ± 0.07; RBH ratio, 0.82 ± 0.13). Concerning compression rate, 4 of 5 techniques reached the required compression rate (ratio: HS, 1.08 ± 0.11; Evetts-Russomano [ER], 1.01 ± 0.06; standard side straddle, 1.00 ± 0.03; and straddling maneuver, 1.03 ± 0.12). The RBH method did not meet the required criteria (0.89 ± 0.09). The HS method showed the highest cardiac output (69.3% above the required CP), followed by the ER technique (33

  15. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    Science.gov (United States)

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398

  16. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low-failure-probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
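Method (1), matching the moments of the beta prior to the observed failure fractions, can be sketched on simulated beta-binomial data as follows. All parameter values are illustrative; note that this naive version leaves the binomial sampling variance inside the sample variance, which tends to bias the estimates, one reason the report compares it against the marginal-based estimators.

```python
import numpy as np

# "Prior matching moments" sketch: treat each component's observed failure
# fraction p_i = k_i / n_i as a draw from the beta prior and solve the
# beta(alpha, beta) parameters from the sample mean mu and variance v:
#   alpha = mu * (mu*(1-mu)/v - 1),   beta = (1-mu) * (mu*(1-mu)/v - 1)
rng = np.random.default_rng(6)

alpha_true, beta_true = 2.0, 50.0          # illustrative low-failure prior
n_demands = 200                            # demands per component
n_comp = 500                               # components observed

p_i = rng.beta(alpha_true, beta_true, n_comp)   # latent per-component p
k_i = rng.binomial(n_demands, p_i)              # failures on demand
frac = k_i / n_demands

mu, v = frac.mean(), frac.var(ddof=1)
common = mu * (1 - mu) / v - 1
alpha_hat = mu * common                    # biased low: v includes binomial noise
beta_hat = (1 - mu) * common
```

Subtracting an estimate of the binomial contribution from v before solving would reduce that bias; the marginal maximum likelihood estimators avoid it by modeling the compound distribution directly.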

  17. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    present the optical fractionator in conjunction with BrdU immunohistochemistry to estimate the production and survival of newly-formed neurons in the granule cell layer (including the sub-granular zone) of the rat hippocampus following electroconvulsive stimulation, which is among the most potent...

  18. Estimating changes in urban land and urban population using refined areal interpolation techniques

    Science.gov (United States)

    Zoraghein, Hamidreza; Leyk, Stefan

    2018-05-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
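The simplest of the methods mentioned, Areal Weighting, redistributes each source zone's population in proportion to its area of overlap with each target zone under a uniform-density assumption. The zone names, areas, and populations below are hypothetical; a real workflow would compute the overlap table with a GIS overlay, and TDW or spatial refinement would replace the uniform-density assumption with ancillary data.

```python
# Simple Areal Weighting (AW) sketch with a hypothetical overlap table.
# intersections[(source, target)] = overlap area in km^2
intersections = {
    ("tract_1990_A", "tract_2010_X"): 3.0,
    ("tract_1990_A", "tract_2010_Y"): 1.0,
    ("tract_1990_B", "tract_2010_Y"): 2.0,
}
source_pop = {"tract_1990_A": 4000, "tract_1990_B": 1000}
source_area = {"tract_1990_A": 4.0, "tract_1990_B": 2.0}   # km^2

target_pop = {}
for (src, tgt), overlap in intersections.items():
    share = overlap / source_area[src]          # uniform-density assumption
    target_pop[tgt] = target_pop.get(tgt, 0.0) + share * source_pop[src]

print(target_pop)   # {'tract_2010_X': 3000.0, 'tract_2010_Y': 2000.0}
```

Spatial refinement as described above amounts to restricting the denominators and overlaps to the portion of each zone flagged as settled by the ancillary layer (census urban areas, NLCD, GHSL) rather than its whole area.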

  19. A triple isotope technique for estimation of fat and vitamin B12 malabsorption in Crohn's disease

    International Nuclear Information System (INIS)

    Pedersen, N.T.; Rannem, T.

    1991-01-01

    A test for simultaneous estimation of vitamin B12 and fat absorption from stool samples was investigated in 25 patients with severe diarrhoea after operation for Crohn's disease. 51CrCl3 was ingested as a non-absorbable marker, 58Co-cyanocobalamin as vitamin B12 tracer, and 14C-triolein as lipid tracer. Faeces were collected separately for three days. Some stool-to-stool variation in the 58Co/51Cr and 14C/51Cr ratios was seen. When the 58Co-B12 and 14C-triolein excretion was estimated in samples of the two stools with the highest activities of 51Cr, the variations of the estimates were less than ±10% and ±15% of the doses ingested, respectively. 12 of the 25 patients were not able to collect faeces and urine quantitatively and separately. However, in all patients, faeces with sufficient radioactivity for simultaneous estimation of faecal 58Co-B12 and 14C-triolein excretion from stool samples were obtained. 16 refs., 3 figs., 1 tab
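
The arithmetic behind the non-absorbable-marker method is simple: because 51Cr is not absorbed, the tracer-to-marker ratio in any stool sample, divided by the ratio ingested, estimates the fraction of the tracer dose excreted, regardless of how complete the collection was. A sketch with hypothetical numbers:

```python
def fraction_excreted(tracer_stool, marker_stool, tracer_dose, marker_dose):
    """Non-absorbable-marker method: the 51Cr marker corrects for
    incomplete stool collection, so the stool tracer/marker ratio divided
    by the ingested tracer/marker ratio estimates the fraction of the
    tracer dose excreted."""
    return (tracer_stool / marker_stool) / (tracer_dose / marker_dose)

# Suppose half the 58Co-B12 dose was absorbed and only 30% of the stool
# was collected: the ratio method still recovers 50% excretion.
f = fraction_excreted(tracer_stool=0.5 * 0.3, marker_stool=1.0 * 0.3,
                      tracer_dose=1.0, marker_dose=1.0)
```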

  20. Evaluating the microscopic fecal technique for estimating hard mast in turkey diets

    Science.gov (United States)

    Mark A. Rumble; Stanley H. Anderson

    1993-01-01

    Wild and domestic dark turkeys (Meleagris gallopavo) were fed experimental diets containing acorn (Quercus gambelii), ponderosa pine (Pinus ponderosa) seed, grasses, forbs, and arthropods. In fecal estimates of diet composition, acorn and ponderosa pine seed were underestimated and grass was overestimated....

  1. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    Science.gov (United States)

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-01

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.
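
The non-uniqueness the authors describe is easy to reproduce with a power-law Z-S relation: the same reflectivity yields very different snowfall rates under different microphysical assumptions. The coefficient pairs below are illustrative assumptions, not values from the study:

```python
import math

def snowfall_rate(ze_dbz, a, b):
    """Invert a power-law Z-S relation, Ze = a * S**b, for snowfall rate
    S (mm/h) given an equivalent reflectivity in dBZ."""
    ze_linear = 10.0 ** (ze_dbz / 10.0)  # dBZ -> linear (mm^6 m^-3)
    return (ze_linear / a) ** (1.0 / b)

# The same 20 dBZ echo under two different snow-particle assumptions:
s1 = snowfall_rate(20.0, a=56.0, b=1.2)
s2 = snowfall_rate(20.0, a=313.0, b=1.85)
# s1 and s2 differ by roughly a factor of three -- the ambiguity that the
# MASC fall-speed and PSD observations are used to constrain.
```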

  2. A spatial compression technique for head-related transfer function interpolation and complexity estimation

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John

    2015-01-01

    A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
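
The compression idea can be sketched for a single frequency bin: fit a truncated Legendre series to the magnitude sampled over azimuth, then evaluate the series between measured directions. The linear mapping of azimuth onto [-1, 1] is an assumption of this sketch, not necessarily the paper's parameterization:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def compress_hrtf_bin(azimuths_deg, magnitudes, degree):
    """Fit a truncated Legendre series to one frequency bin of an HRTF
    magnitude dataset sampled over azimuth.  The number of coefficients
    needed for a good fit indicates spatial complexity."""
    x = np.asarray(azimuths_deg, float) / 180.0 - 1.0  # map [0, 360) -> [-1, 1)
    return leg.legfit(x, np.asarray(magnitudes, float), degree)

def interpolate_hrtf_bin(coefs, azimuth_deg):
    """Evaluate the series at an unmeasured azimuth (interpolation)."""
    return leg.legval(azimuth_deg / 180.0 - 1.0, coefs)

# Fit 36 measured azimuths, then interpolate at 5 degrees:
az = np.arange(0.0, 360.0, 10.0)
mags = 1.0 + 0.5 * np.cos(np.radians(az))   # smooth synthetic bin
coefs = compress_hrtf_bin(az, mags, degree=8)
estimate = interpolate_hrtf_bin(coefs, 5.0)
```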

  3. Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression

    Science.gov (United States)

    Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.

    2013-01-01

    Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…

  4. Estimating bridge stiffness using a forced-vibration technique for timber bridge health monitoring

    Science.gov (United States)

    James P. Wacker; Xiping Wang; Brian Brashaw; Robert J. Ross

    2006-01-01

    This paper describes an effort to refine a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the frequency response of several simple-span, sawn timber beam (with plank deck) bridges located in St. Louis County, Minnesota. Static load deflections were also measured to...

  5. Ground Receiving Station Reference Pair Selection Technique for a Minimum Configuration 3D Emitter Position Estimation Multilateration System

    Directory of Open Access Journals (Sweden)

    Abdulmalik Shehu Yaro

    2017-01-01

    Multilateration estimates aircraft position using the Time Difference Of Arrival (TDOA) with a lateration algorithm. The Position Estimation (PE) accuracy of the lateration algorithm depends on several factors: the TDOA estimation error, the lateration algorithm approach, the number of deployed GRSs, and the selection of the GRS reference used for the PE process. Using the minimum number of GRSs for 3D emitter PE, a technique based on the condition number calculation is proposed to select the suitable GRS reference pair for improving the accuracy of the PE using the lateration algorithm. Validation of the proposed technique was performed with the GRSs in square and triangular configurations. For the selected emitter positions, the results show that the proposed technique can be used to select the suitable GRS reference pair for the PE process. A unity condition number is achieved for the GRS pair most suitable for the PE process. Monte Carlo simulation results, in comparison with the fixed GRS reference pair lateration algorithm, show a reduction in PE error of at least 70% for both the square and triangular configurations.
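
A simplified, 2D, single-reference version of the idea can be sketched as follows: for each candidate reference station, build the lateration geometry matrix of position differences and pick the reference with the smallest condition number (values near 1 indicate a well-conditioned solution). The station layout below is hypothetical:

```python
import numpy as np

def select_reference(grs_positions):
    """Rank candidate reference GRSs by the condition number of the
    lateration geometry matrix built with that reference; the smallest
    (closest to 1) is the best-conditioned choice."""
    P = np.asarray(grs_positions, float)
    conds = {}
    for r in range(len(P)):
        A = np.array([P[k] - P[r] for k in range(len(P)) if k != r])
        conds[r] = np.linalg.cond(A)
    best = min(conds, key=conds.get)
    return best, conds

# Four stations in an irregular layout; station 0 gives the best geometry:
best, conds = select_reference([[0, 0], [1, 0], [2, 0], [0, 1]])
```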

  6. Fatigue life estimation of a 1D aluminum beam under mode-I loading using the electromechanical impedance technique

    International Nuclear Information System (INIS)

    Lim, Yee Yan; Soh, Chee Kiong

    2011-01-01

    Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future

  7. Fatigue life estimation of a 1D aluminum beam under mode-I loading using the electromechanical impedance technique

    Science.gov (United States)

    Lim, Yee Yan; Kiong Soh, Chee

    2011-12-01

    Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.

  8. A multi-feature integration method for fatigue crack detection and crack length estimation in riveted lap joints using Lamb waves

    Science.gov (United States)

    He, Jingjing; Guan, Xuefei; Peng, Tishun; Liu, Yongming; Saxena, Abhinav; Celaya, Jose; Goebel, Kai

    2013-10-01

    This paper presents an experimental study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in situ non-destructive evaluation (NDE) during fatigue cyclical loading. PZT wafers are used to monitor the wave reflection from the boundaries of the fatigue crack at the edge of bolt joints. The group velocity of the guided wave is calculated to select a proper time window in which the received signal contains the damage information. It is found that the fatigue crack lengths are correlated with three main features of the signal, i.e., correlation coefficient, amplitude change, and phase change. It was also observed that a single feature cannot be used to quantify the damage among different specimens since a considerable variability was observed in the response from different specimens. A multi-feature integration method based on a second-order multivariate regression analysis is proposed for the prediction of fatigue crack lengths using sensor measurements. The model parameters are obtained using training datasets from five specimens. The effectiveness of the proposed methodology is demonstrated using several lap joint specimens from different manufacturers and under different loading conditions.
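
A second-order multivariate regression of this kind can be fit with ordinary least squares. The sketch below uses intercept, linear, and pure quadratic terms only (the paper's exact term set is not given here, and the training data are synthetic):

```python
import numpy as np

def quad_design(features):
    """Design matrix with intercept, linear, and pure quadratic terms
    for the three signal features (correlation coefficient, amplitude
    change, phase change); cross terms omitted for brevity."""
    X = np.atleast_2d(np.asarray(features, float))
    return np.column_stack([np.ones(len(X)), X, X**2])

def fit_crack_model(features, lengths):
    """Least-squares fit of crack length against the quadratic features."""
    coef, *_ = np.linalg.lstsq(quad_design(features),
                               np.asarray(lengths, float), rcond=None)
    return coef

def predict_length(coef, features):
    return quad_design(features) @ coef
```

Trained on feature/length pairs pooled from several specimens, the fitted coefficients can then predict crack length from new sensor measurements.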

  9. A multi-feature integration method for fatigue crack detection and crack length estimation in riveted lap joints using Lamb waves

    International Nuclear Information System (INIS)

    He, Jingjing; Guan, Xuefei; Peng, Tishun; Liu, Yongming; Saxena, Abhinav; Celaya, Jose; Goebel, Kai

    2013-01-01

    This paper presents an experimental study of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in situ non-destructive evaluation (NDE) during fatigue cyclical loading. PZT wafers are used to monitor the wave reflection from the boundaries of the fatigue crack at the edge of bolt joints. The group velocity of the guided wave is calculated to select a proper time window in which the received signal contains the damage information. It is found that the fatigue crack lengths are correlated with three main features of the signal, i.e., correlation coefficient, amplitude change, and phase change. It was also observed that a single feature cannot be used to quantify the damage among different specimens since a considerable variability was observed in the response from different specimens. A multi-feature integration method based on a second-order multivariate regression analysis is proposed for the prediction of fatigue crack lengths using sensor measurements. The model parameters are obtained using training datasets from five specimens. The effectiveness of the proposed methodology is demonstrated using several lap joint specimens from different manufacturers and under different loading conditions. (paper)

  10. A Robust Parametric Technique for Multipath Channel Estimation in the Uplink of a DS-CDMA System

    Directory of Open Access Journals (Sweden)

    2006-01-01

    The problem of estimating the multipath channel parameters of a new user entering the uplink of an asynchronous direct sequence-code division multiple access (DS-CDMA) system is addressed. The problem is described via a least squares (LS) cost function with a rich structure. This cost function, which is nonlinear with respect to the time delays and linear with respect to the gains of the multipath channel, is proved to be approximately decoupled in terms of the path delays. Due to this structure, an iterative procedure of 1D searches is adequate for time delays estimation. The resulting method is computationally efficient, does not require any specific pilot signal, and performs well for a small number of training symbols. Simulation results show that the proposed technique offers a better estimation accuracy compared to existing related methods, and is robust to multiple access interference.

  11. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m2 plots in February 2000 and two 4 m2 plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
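
The Beer's-law inversion used for the PAR technique is a one-liner; as the record notes, the extinction coefficient k is site- and season-specific, and the value below is only illustrative:

```python
import math

def lai_from_par(par_below, par_above, k):
    """Invert Beer's law, I/I0 = exp(-k * LAI), to estimate leaf area
    index from PAR measured below and above the canopy."""
    return -math.log(par_below / par_above) / k

# About 17.4% transmittance with k = 0.5 implies an LAI of roughly 3.5:
lai = lai_from_par(0.174, 1.0, 0.5)
```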

  12. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d' Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m{sup 2} plots in February 2000 and two 4m{sup 2} plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  13. Capacity Estimation and Near-Capacity Achieving Techniques for Digitally Modulated Communication Systems

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov

    This thesis studies potential improvements that can be made to the current data rates of digital communication systems. The physical layer of the system will be investigated in band-limited scenarios, where high spectral efficiency is necessary in order to meet the ever-growing data rate demand. Several issues are tackled, with both theoretical and more practical aspects. The theoretical part is mainly concerned with estimating the constellation constrained capacity (CCC) of channels with discrete input, which is an inherent property of digital communication systems. The channels under investigation include linear interference channels of high dimensionality (such as multiple-input multiple-output) and the non-linear optical fiber channel, which has been gathering more and more attention from the information theory community in recent years. In both cases novel CCC estimates and lower...

  14. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    Science.gov (United States)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method, based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method allows to achieve a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.

  15. Estimated sedimentation rate by radionuclide techniques at Lam Phra Phloeng dam, Northeastern of Thailand

    International Nuclear Information System (INIS)

    Sasimonton Moungsrijun; Kanitha Srisuksawad; Kosit Lorsirirat; Tuangrak Nantawisarakul

    2009-01-01

    The Lam Phra Phloeng dam is located in Nakhon Ratchasima province, northeastern Thailand. Since its construction in 1963, the dam has suffered a severe reduction of its water storage capacity, caused by deforestation for agricultural land in the upper catchment. Sediment cores were collected using a gravity corer. Sedimentation rates were estimated from the vertical distribution of unsupported Pb-210 in the sediment cores. Total Pb-210 was determined by measuring Po-210 activities; the Po-210 and Ra-226 activities were measured by alpha and gamma spectrometry, respectively. The sedimentation rate was estimated using the Constant Initial Concentration (CIC) model, giving 0.265 g cm-2 y-1 at the dam crest and 0.213 g cm-2 y-1 upstream (Author)
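
Under the CIC model, the log of unsupported Pb-210 activity declines linearly with cumulative mass depth, and the slope gives the mass accumulation rate. A sketch with synthetic core data (the activities below are hypothetical, not the study's measurements):

```python
import numpy as np

PB210_LAMBDA = 0.03114  # Pb-210 decay constant, 1/yr (half-life ~22.3 yr)

def cic_sedimentation_rate(mass_depth, unsupported_pb210):
    """CIC model: ln(activity) vs. cumulative mass depth (g/cm^2) is a
    straight line with slope = -lambda / r, so the mass accumulation
    rate is r = -lambda / slope (g cm^-2 yr^-1)."""
    slope, _ = np.polyfit(np.asarray(mass_depth, float),
                          np.log(np.asarray(unsupported_pb210, float)), 1)
    return -PB210_LAMBDA / slope

# Synthetic profile generated with r = 0.265 g cm^-2 yr^-1:
m = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
activity = 12.0 * np.exp(-PB210_LAMBDA * m / 0.265)
rate = cic_sedimentation_rate(m, activity)
```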

  16. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    Science.gov (United States)

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community, and to estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as demonstrating the importance of these individuals' knowledge of biological resources to the community. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  17. Application of Ambient Analysis Techniques for the Estimation of Electromechanical Oscillations from Measured PMU Data in Four Different Power Systems

    DEFF Research Database (Denmark)

    Vanfretti, Luigi; Dosiek, Luke; Pierre, John W.

    2011-01-01

    The application of advanced signal processing techniques to power system measurement data for the estimation of dynamic properties has been a research subject for over two decades. Several techniques have been applied to transient (or ringdown) data, ambient data, and to probing data. Some... of these methodologies have been included in off-line analysis software, and are now being incorporated into software tools used in control rooms for monitoring the near real-time behavior of power system dynamics. In this paper we illustrate the practical application of some ambient analysis methods... and planners, as they provide information on the applicability of these techniques via readily available signal processing tools; in addition, it is shown how to critically analyze the results obtained with these methods....

  18. Weight estimates and packaging techniques for the microwave radiometer spacecraft. [shuttle compatible design

    Science.gov (United States)

    Jensen, J. K.; Wright, R. L.

    1981-01-01

    Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.

  19. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We will illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. The study considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63 ± 19% (over a range between -2.5% and 86%) compared with our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
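
The PCA-plus-linear-model pipeline can be sketched with an SVD-based PCA and a least-squares fit from the retained scores to the BP shift. The synthetic profiles below stand in for the simulated prompt-gamma profiles; the retained-variance threshold mirrors the 99.95% quoted in the record:

```python
import numpy as np

def fit_pca_linear(profiles, shifts, var_kept=0.9995):
    """PCA (via SVD) on training profiles, then a linear model from the
    retained principal scores to the Bragg peak shift."""
    X = np.asarray(profiles, float)
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = (s ** 2) / (s ** 2).sum()
    n = int(np.searchsorted(np.cumsum(var), var_kept) + 1)  # components kept
    comps = Vt[:n]
    scores = Xc @ comps.T
    D = np.column_stack([np.ones(len(scores)), scores])
    w, *_ = np.linalg.lstsq(D, np.asarray(shifts, float), rcond=None)
    return mean, comps, w

def predict_shift(model, profile):
    mean, comps, w = model
    score = (np.asarray(profile, float) - mean) @ comps.T
    return w[0] + score @ w[1:]
```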

  20. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    Science.gov (United States)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
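
For Poisson-distributed photon counts, the likelihood-ratio approach leads to the C-statistic: differences in C between models behave asymptotically like chi-square, so minimizing C fits parameters without binning into the Gaussian regime. A minimal sketch (the constant ln(n!) term, which is model-independent, is dropped):

```python
import math

def cash_statistic(observed, model):
    """C = 2 * sum(m_i - n_i * ln m_i) for observed counts n_i and model
    predictions m_i > 0; lower C means a better fit, and delta-C between
    nested models is asymptotically chi-square distributed."""
    return 2.0 * sum(m - n * math.log(m) for n, m in zip(observed, model))

# Each term is minimized when the model matches the observed counts:
counts = [3.0, 5.0, 2.0]
c_best = cash_statistic(counts, counts)
c_off = cash_statistic(counts, [1.2 * x for x in counts])
```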

  1. Data-driven Techniques to Estimate Parameters in the Homogenized Energy Model for Shape Memory Alloys

    Science.gov (United States)

    2011-11-01

    Both cases are compared to experimental data at various temperatures, and the optimized model parameters are compared to the initial estimates. ... The super-elastic effect has been utilized in orthodontic wires, eye-glass frames, stents, and annuloplasty bands [23].

  2. Application on technique of joint time-frequency analysis of seismic signal's first arrival estimation

    International Nuclear Information System (INIS)

    Xu Chaoyang; Liu Junmin; Fan Yanfang; Ji Guohua

    2008-01-01

    Joint time-frequency analysis constructs a joint density function of time and frequency. It can reveal a signal's frequency components and their evolution over time, and is a modern extension of Fourier analysis. In this paper, in view of the noise characteristics of seismic signals, a method for estimating a seismic signal's first arrival based on the triple correlation of the joint time-frequency spectrum is introduced, and experimental results and conclusions are presented. (authors)

  3. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.

  4. Estimation of dew point temperature using neuro-fuzzy and neural network techniques

    Science.gov (United States)

    Kisi, Ozgur; Kim, Sungwon; Shiri, Jalal

    2013-11-01

    This study investigates the ability of two different artificial neural network (ANN) models, generalized regression neural networks model (GRNNM) and Kohonen self-organizing feature maps neural networks model (KSOFM), and two different adaptive neural fuzzy inference system (ANFIS) models, ANFIS model with sub-clustering identification (ANFIS-SC) and ANFIS model with grid partitioning identification (ANFIS-GP), for estimating daily dew point temperature. The climatic data that consisted of 8 years of daily records of air temperature, sunshine hours, wind speed, saturation vapor pressure, relative humidity, and dew point temperature from three weather stations, Daegu, Pohang, and Ulsan, in South Korea were used in the study. The estimates of the ANN and ANFIS models were compared according to three different statistics: root mean square errors, mean absolute errors, and determination coefficient. Comparison results revealed that the ANFIS-SC, ANFIS-GP, and GRNNM models showed almost the same accuracy and performed better than the KSOFM model. Results also indicated that the sunshine hours, wind speed, and saturation vapor pressure have little effect on dew point temperature. It was found that the dew point temperature could be successfully estimated by using the Tmean and RH variables.
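
The finding that mean temperature and relative humidity suffice is consistent with the physics: the Magnus approximation computes dew point directly from those two variables, and makes a natural baseline against which to judge the data-driven models. A sketch using standard Magnus coefficients over water:

```python
import math

def dew_point(t_air_c, rh_percent, a=17.62, b=243.12):
    """Magnus approximation: dew point (deg C) from air temperature
    (deg C) and relative humidity (%); a and b are standard Magnus
    coefficients for saturation vapor pressure over water."""
    gamma = a * t_air_c / (b + t_air_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

# At 100% relative humidity, the dew point equals the air temperature:
td = dew_point(20.0, 100.0)
```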

  5. An innovative technique for estimating water saturation from capillary pressure in clastic reservoirs

    Science.gov (United States)

    Adeoti, Lukumon; Ayolabi, Elijah Adebowale; James, Logan

    2017-11-01

    A major drawback of older resistivity tools is their poor vertical resolution, which degrades hydrocarbon estimates when water saturation (Sw) is derived by the historical resistivity-based method. In this study, we have provided an alternative method, called the saturation height function, to estimate hydrocarbon in some clastic reservoirs in the Niger Delta. The saturation height function was derived from pseudo capillary pressure curves generated using modern wells with complete log data. Our method was based on the determination of rock type from a log-derived porosity-permeability relationship, supported by volume of shale for its classification into different zones. Leverett J-functions were derived for each rock type. Our results show good correlation between Sw from the resistivity-based method and Sw from pseudo capillary pressure curves in wells with modern log data. The resistivity-based model overestimates Sw in some wells, while Sw from the pseudo capillary pressure curves validates and predicts more accurate Sw. In addition, the result of Sw from pseudo capillary pressure curves replaces that of the resistivity-based model in a well where the resistivity equipment failed. The plot of hydrocarbon pore volume (HCPV) from the J-function against HCPV from Archie shows that wells with high HCPV have high sand qualities and vice versa. This was further used to predict the geometry of stratigraphic units. The model presented here freshly addresses the gap in the estimation of Sw and is applicable to reservoirs of similar rock type in other frontier basins worldwide.
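
A saturation height function built on the Leverett J-function can be sketched as follows. The interfacial-tension term, pressure gradient, and the power-law fit constants c and d below are hypothetical placeholders; in practice they come from core data and the pseudo capillary pressure curves for each rock type:

```python
import math

def leverett_j(pc_psi, k_md, phi, sigma_cos=26.0):
    """Leverett J-function, J = 0.21645 * Pc * sqrt(k/phi) / (sigma*cos theta),
    with Pc in psi, permeability in mD, and sigma*cos(theta) in dyn/cm
    (0.21645 is the oilfield unit-conversion constant)."""
    return 0.21645 * pc_psi * math.sqrt(k_md / phi) / sigma_cos

def sw_from_height(h_ft, k_md, phi, grad_psi_per_ft=0.083, c=0.35, d=1.6):
    """Saturation height function: capillary pressure from height above
    the free-water level, then invert a power-law fit J(Sw) = c * Sw**(-d)."""
    j = leverett_j(grad_psi_per_ft * h_ft, k_md, phi)
    return min(1.0, (j / c) ** (-1.0 / d))

# Water saturation falls with height above the free-water level:
sw_20ft = sw_from_height(20.0, k_md=100.0, phi=0.25)
sw_200ft = sw_from_height(200.0, k_md=100.0, phi=0.25)
```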

  6. Genetic divergence of rubber tree estimated by multivariate techniques and microsatellite markers

    Directory of Open Access Journals (Sweden)

    Lígia Regina Lima Gouvêa

    2010-01-01

    Genetic diversity of 60 Hevea genotypes, consisting of Asiatic, Amazonian, African and IAC clones pertaining to the genetic breeding program of the Agronomic Institute (IAC), Brazil, was estimated. Analyses were based on phenotypic multivariate parameters and microsatellites. Five agronomic descriptors were employed in multivariate procedures, such as Standard Euclidian Distance, Tocher clustering and principal component analysis. Genetic variability among the genotypes was estimated with 68 selected polymorphic SSRs, by way of Modified Rogers Genetic Distance and UPGMA clustering. Structure software in a Bayesian approach was used in discriminating among groups. Genetic diversity was estimated through Nei's statistics. The genotypes were clustered into 12 groups according to the Tocher method, while the molecular analysis identified six groups. In the phenotypic and microsatellite analyses, the Amazonian and IAC genotypes were distributed in several groups, whereas the Asiatic were in only a few. Observed heterozygosity ranged from 0.05 to 0.96. Both high total diversity (HT' = 0.58) and high gene differentiation (GST' = 0.61) were observed, indicating high genetic variation among the 60 genotypes, which may be useful for breeding programs. The analyzed agronomic parameters and SSR markers were effective in assessing genetic diversity among Hevea genotypes, besides proving useful for characterizing genetic variability.
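    The Nei statistics quoted above are simple functions of diversity components; a minimal sketch of the (unprimed) coefficient of gene differentiation, with the reported values plugged in only for illustration:

```python
def nei_gst(h_total: float, h_within: float) -> float:
    """Nei's coefficient of gene differentiation: the share of total gene
    diversity that lies among rather than within groups."""
    return (h_total - h_within) / h_total

# With total diversity 0.58 and differentiation 0.61 as reported above, the
# implied mean within-group diversity is 0.58 * (1 - 0.61), about 0.23.
```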

  7. Data Mining Techniques to Estimate Plutonium, Initial Enrichment, Burnup, and Cooling Time in Spent Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Trellue, Holly Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fugate, Michael Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tobin, Stephen Joesph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    The Next Generation Safeguards Initiative (NGSI), Office of Nonproliferation and Arms Control (NPAC), National Nuclear Security Administration (NNSA) of the U.S. Department of Energy (DOE) has sponsored a multi-laboratory, university, international partner collaboration to (1) detect replaced or missing pins from spent fuel assemblies (SFA) to confirm item integrity and deter diversion, (2) determine plutonium mass and related plutonium and uranium fissile mass parameters in SFAs, and (3) verify initial enrichment (IE), burnup (BU), and cooling time (CT) of facility declaration for SFAs. A wide variety of nondestructive assay (NDA) techniques were researched to achieve these goals [Veal, 2010 and Humphrey, 2012]. In addition, the project includes two related activities with facility-specific benefits: (1) determination of heat content and (2) determination of reactivity (multiplication). In this research, a subset of 11 integrated NDA techniques was researched using data mining solutions at Los Alamos National Laboratory (LANL) for their ability to achieve the above goals.

  8. Application of fission track technique for estimation of uranium concentration in drinking waters of Punjab

    International Nuclear Information System (INIS)

    Prabhu, S.P.; Sawant, P.D.; Raj, S.S.; Kumar, A.; Sarkar, P.K.; Tripathi, R.M.

    2012-01-01

    Drinking water samples were collected from four districts of Punjab, namely Bhatinda, Mansa, Faridkot and Firozpur, to determine U(nat.) concentrations. All samples were preserved, processed and analyzed by laser fluorimetry (LF). To ensure the accuracy of the LF data, a few samples (10 per district) were also analyzed by alpha spectrometry and by the fission track analysis (FTA) technique. For the FTA technique, a few μL of water sample were transferred to a polythene tube, a Lexan detector was immersed in it, and the open end of the tube was heat-sealed. Two samples and one uranium standard were irradiated in the DHRUVA reactor. The irradiated detectors were chemically etched and the tracks counted using an optical microscope. Uranium concentrations in the samples ranged from 3.2 to 60.5 ppb and were comparable with those observed by LF. (author)
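    The comparator arithmetic behind the FTA measurement (sample and standard irradiated in the same neutron fluence, so concentration scales with track density) can be sketched as follows; the numbers are illustrative:

```python
def uranium_ppb_by_fta(track_density_sample: float,
                       track_density_standard: float,
                       standard_ppb: float) -> float:
    """Comparator method: with sample and uranium standard irradiated
    together, uranium concentration scales with fission-track density."""
    return standard_ppb * track_density_sample / track_density_standard

# A sample showing 1.5x the track density of a 20 ppb standard:
print(uranium_ppb_by_fta(300.0, 200.0, 20.0))  # 30.0
```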

  9. A differential absorption technique to estimate atmospheric total water vapor amounts

    Science.gov (United States)

    Frouin, Robert; Middleton, Elizabeth

    1990-01-01

    Vertically integrated water-vapor amounts can be remotely determined by measuring the solar radiance reflected by the earth's surface with satellite- or aircraft-based instruments. The technique is based on the method of Fowle (1912, 1913) and utilizes the 0.940-micron water-vapor band to retrieve total-water-vapor data that are independent of surface reflectance properties and other atmospheric constituents. A channel combination is proposed to provide more accurate results, the SE-590 spectrometer is used to verify the data, and the effects of atmospheric photon backscattering are examined. The spectrometer and radiosonde data confirm the accuracy of using a narrow and a wide channel centered on the same wavelength to determine water-vapor amounts. The technique is suitable for cloudless conditions and can contribute to atmospheric corrections of land-surface parameters.

  10. Application of the luminescence single-aliquot technique for dose estimation in the Marmara Sea

    International Nuclear Information System (INIS)

    Tanir, Guenes; Sencan, Emine; Boeluekdemir, M. Hicabi; Tuerkoez, M. Burak; Tel, Eyuep

    2005-01-01

    The aim of this study is to obtain the equivalent dose, the key quantity in all studies that use luminescence to date sediments. Recent advances in luminescence dating have led to increasing application of the technique to sediments from depositional environments. The sample used in this study is an active main-fault sample collected from the Sea of Marmara in NW Turkey. The equivalent dose was measured using both the multiple-aliquot and the single-aliquot techniques. The single-aliquot regeneration and added dose (SARA) procedure was also used. The result obtained was not in agreement with the results of the multiple-aliquot procedure, so a simple modification of the SARA procedure was suggested. In our modified procedure, the calculated dose (D) values were obtained by using the additive dose protocol instead of the regeneration protocol

  11. Skill Assessment of a Hybrid Technique to Estimate Quantitative Precipitation Forecast for Galicia (NW Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of the NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful to achieve a good QPF. In this work, a new technique of QPF for Galicia (NW of Spain) is presented. This region has a percentage of rainy days per year greater than 50%, with quantities that may cause floods, with human and economic damage. The technique is composed of a NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S. 2000. "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp. 306-337. [2] de Bruijn, E.I.F. and T. Brandsma. "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp. 148-161. [3] Ribalaygua, J. and Boren, R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular" [Classification of spatial patterns of daily precipitation over peninsular Spain]. Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  12. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
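    The voxel S value method compared above amounts to a 3-D convolution of the cumulated-activity (TIAC) map with a radionuclide-specific S value kernel. A minimal sketch with an illustrative kernel (the numbers are not actual 99mTc S values):

```python
import numpy as np
from scipy.signal import fftconvolve

def voxel_dose(cumulated_activity: np.ndarray, s_kernel: np.ndarray) -> np.ndarray:
    """Voxel S value dosimetry: the dose map is the cumulated-activity map
    convolved with a kernel giving dose to a voxel per decay in a neighbour."""
    return fftconvolve(cumulated_activity, s_kernel, mode="same")

# Toy check: a single source voxel spreads dose according to the kernel.
act = np.zeros((5, 5, 5))
act[2, 2, 2] = 1.0
kernel = np.zeros((3, 3, 3))
kernel[1, 1, 1] = 0.7                    # self-dose term (illustrative)
kernel[0, 1, 1] = kernel[2, 1, 1] = 0.1  # nearest neighbours along one axis
dose = voxel_dose(act, kernel)
```

Unlike organ-level S values, this keeps the spatial dose heterogeneity that the Monte Carlo reference also captures.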

  13. Estimating surface soil moisture from SMAP observations using a Neural Network technique.

    Science.gov (United States)

    Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P

    2018-01-01

    A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m³/m³, 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m³/m³, 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.
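    The triple collocation analysis mentioned above estimates each product's error variance from the pairwise covariances of three independent estimates of the same signal. A sketch on synthetic data, using the covariance-notation formulation and assuming additive, mutually uncorrelated errors:

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Classical triple collocation: error standard deviations of three
    collocated estimates of one signal with independent additive errors."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return tuple(np.sqrt(max(v, 0.0)) for v in (ex2, ey2, ez2))

# Synthetic check: three noisy views of the same soil-moisture signal.
rng = np.random.default_rng(0)
truth = rng.normal(0.25, 0.05, 20000)
x = truth + rng.normal(0.0, 0.02, truth.size)
y = truth + rng.normal(0.0, 0.03, truth.size)
z = truth + rng.normal(0.0, 0.04, truth.size)
errors = triple_collocation_rmse(x, y, z)
```

With enough samples, the recovered error levels approach the noise standard deviations used to generate the data (0.02, 0.03, 0.04 m³/m³).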

  14. Solar resources estimation combining digital terrain models and satellite images techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bosch, J.L.; Batlles, F.J. [Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria (Spain); Zarzalejo, L.F. [CIEMAT, Departamento de Energia, Madrid (Spain); Lopez, G. [EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva (Spain)

    2010-12-15

    One of the most important steps in making use of any renewable energy is to perform an accurate estimation of the resource to be exploited. In the design process of both active and passive solar energy systems, radiation data is required for the site, with proper spatial resolution. Generally, a network of radiometric stations is used in this evaluation, but when the stations are too dispersed or not available for the study area, satellite images can be used as indirect solar radiation measurements. Although satellite images cover wide areas with a good acquisition frequency, they usually have a poor spatial resolution limited by the size of the image pixel, and irradiation must be interpolated to evaluate solar irradiation at a sub-pixel scale. When pixels are located in flat and homogeneous areas, correlation of solar irradiation is relatively high, and classic interpolation can provide a good estimation. However, in zones of complex topography, data interpolation is not adequate and the use of Digital Terrain Model (DTM) information can be helpful. In this work, daily solar irradiation is estimated for a wide mountainous area using a combination of Meteosat satellite images and a DTM, with the advantage of avoiding the need for ground measurements. This methodology uses a modified Heliosat-2 model and applies to all sky conditions; it also introduces a horizon calculation for the DTM points and accounts for the effect of snow cover. Model performance has been evaluated against data measured at 12 radiometric stations, with results in terms of the Root Mean Square Error (RMSE) of 10% and a Mean Bias Error (MBE) of +2%, both expressed as a percentage of the mean value measured. (author)
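    The two reported skill measures can be reproduced exactly from paired estimates and measurements; a sketch (the data values are illustrative):

```python
import numpy as np

def rmse_mbe_percent(estimated, measured):
    """RMSE and MBE expressed as a percentage of the mean measured value,
    the convention used to report model performance above."""
    est, mea = np.asarray(estimated, float), np.asarray(measured, float)
    diff = est - mea
    rmse = np.sqrt(np.mean(diff ** 2))
    mbe = np.mean(diff)
    return 100.0 * rmse / mea.mean(), 100.0 * mbe / mea.mean()

measured = [5.0, 6.0, 4.0, 5.5]    # e.g. daily irradiation, kWh/m^2
estimated = [5.2, 5.9, 4.3, 5.6]
rmse_pct, mbe_pct = rmse_mbe_percent(estimated, measured)
```

A positive MBE, as in the study's +2%, indicates the model overestimates on average; the RMSE additionally penalizes scatter.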

  15. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique for estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement … and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique.

  16. New Technique for TOC Estimation Based on Thermal Core Logging in Low-Permeable Formations (Bazhen fm.)

    Science.gov (United States)

    Popov, Evgeny; Popov, Yury; Spasennykh, Mikhail; Kozlova, Elena; Chekhonin, Evgeny; Zagranovskaya, Dzhuliya; Belenkaya, Irina; Alekseev, Aleksey

    2016-04-01

    A practical method for identifying organic-rich intervals within low-permeability dispersive rocks, based on thermal conductivity measurements along the core, is presented. Non-destructive, non-contact thermal core logging was performed with the optical scanning technique on 4,685 full-size core samples from 7 wells drilled in four low-permeability zones of the Bazhen formation (B.fm.) in Western Siberia (Russia). The method employs continuous simultaneous measurements of rock anisotropy, volumetric heat capacity, thermal anisotropy coefficient and thermal heterogeneity factor along the cores, allowing high vertical resolution (up to 1-2 mm). B.fm. rock matrix thermal conductivity was observed to be essentially stable within the range of 2.5-2.7 W/(m*K). A stable matrix thermal conductivity along with a high thermal anisotropy coefficient is characteristic of B.fm. sediments due to the low rock porosity values. It is shown experimentally that the measured thermal parameters relate linearly to organic richness rather than to deviations in the porosity coefficient. Thus, a new technique was developed that transforms the thermal conductivity profiles into continuous profiles of total organic carbon (TOC) values along the core. Comparison of TOC values estimated from thermal conductivity with pyrolytic TOC estimates for 665 core samples, obtained using the Rock-Eval and HAWK instruments, demonstrated the high efficiency of the new technique for separating organic-rich intervals. The data obtained with the new technique are essential for assessing source-rock hydrocarbon generation potential, for basin and petroleum system modeling, and for estimating hydrocarbon reserves. The method allows the TOC richness to be accurately assessed using thermal well logs. The research was done with financial support of the Russian Ministry of Education and Science (unique identification number RFMEFI58114X0008).
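    The reported linear relation between thermal conductivity and organic richness suggests a calibrate-then-transform workflow; a sketch with illustrative numbers (not the study's calibration):

```python
import numpy as np

# Calibration pairs: thermal conductivity (W/(m*K)) vs pyrolytic TOC (wt%)
# on core plugs. Values are illustrative, not from the study.
tc_calib = np.array([2.6, 2.3, 2.0, 1.7, 1.4])
toc_calib = np.array([0.5, 3.0, 5.5, 8.0, 10.5])

# Least-squares line TOC = a*TC + b, then applied to a continuous TC log
# to obtain a continuous TOC profile along the core.
a, b = np.polyfit(tc_calib, toc_calib, 1)
tc_log = np.array([2.5, 2.1, 1.6])
toc_log = a * tc_log + b
```

The negative slope encodes the inverse relation: lower conductivity in these rocks corresponds to higher organic content.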

  17. Qualitative performance comparison of reactivity estimation between the extended Kalman filter technique and the inverse point kinetic method

    International Nuclear Information System (INIS)

    Shimazu, Y.; Rooijen, W.F.G. van

    2014-01-01

    Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, whose selection is described in this paper. • The EKF algorithm is preferred if the signal-to-noise ratio is low (low-flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, so its selection is quite easy. Some guidance is therefore needed on which method should be selected and how to select the required parameters. From this point of view, a qualitative performance comparison is carried out
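    For contrast with the EKF, the IPK side of the comparison can be sketched with one effective delayed-neutron group; the parameter values below are illustrative, and a steady flux must return zero reactivity:

```python
import numpy as np

def inverse_point_kinetics(n, dt, beta=0.0065, lam=0.08, Lambda=2.0e-5):
    """One-delayed-group inverse point kinetics: recover reactivity rho(t)
    from a measured neutron population n(t). Parameters are illustrative:
    delayed fraction beta, decay constant lam (1/s), generation time Lambda (s)."""
    n = np.asarray(n, float)
    c = beta * n[0] / (lam * Lambda)          # precursors at equilibrium
    rho = np.empty(n.size - 1)
    for k in range(n.size - 1):
        dndt = (n[k + 1] - n[k]) / dt
        rho[k] = beta + Lambda * (dndt - lam * c) / n[k]
        c += dt * (beta * n[k] / Lambda - lam * c)  # precursor balance
    return rho

# Sanity check: a constant neutron population implies zero reactivity.
rho = inverse_point_kinetics(np.full(100, 1.0e6), dt=0.01)
```

In practice the measured n(t) is noisy, which is exactly where the single first-order delay filter of the IPK method, or the EKF, comes in.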

  18. Estimates of erosion on potato lands on krasnozems at Dorringo, NSW, using the caesium-137 technique

    International Nuclear Information System (INIS)

    Elliott, G.L.; Cole-Clark, B.E.

    1993-01-01

    Caesium-137 measurements have been made on soil samples taken from a grid pattern in a paddock used for three spring potato crops since 1966. Total erosion estimated from these measurements averaged 297 t ha⁻¹, equivalent to 98 t ha⁻¹ per crop (allowing for erosion during the pasture phase). Comparative erosion estimates have been made from the results of single-transect sampling in a paddock used for two potato crops and in one under permanent pasture. Results suggest erosion rates of 57 t ha⁻¹ per crop in the former site and 0.09 t ha⁻¹ year⁻¹ in the latter site. An erosion rate of 100 t ha⁻¹ per crop is at least 100 times the probable soil formation rate, implies an economic resource life of at most 600 years, and involves a cost of lost nutrients of at least $3200 per hectare. These results strongly suggest a need to develop and adopt land management practices that substantially reduce both soil detachment and transport. 19 refs., 3 tabs., 8 figs
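    A common way to convert a measured ¹³⁷Cs inventory deficit into cumulative soil loss on cultivated land is the proportional model; the sketch below uses that model with illustrative numbers (the study's own calibration may differ):

```python
def soil_loss_proportional(cs_sample: float, cs_reference: float,
                           bulk_density_kg_m3: float,
                           plough_depth_m: float) -> float:
    """Proportional model: cumulative soil loss (t/ha) equals the fractional
    137Cs inventory deficit times the plough-layer soil mass per hectare."""
    soil_mass_t_ha = bulk_density_kg_m3 * plough_depth_m * 10_000 / 1_000
    deficit = 1.0 - cs_sample / cs_reference
    return deficit * soil_mass_t_ha

# A 20% inventory deficit in a 0.10 m plough layer at 1300 kg/m^3:
print(round(soil_loss_proportional(800.0, 1000.0, 1300.0, 0.10), 6))  # 260.0
```

The reference inventory comes from an undisturbed site (such as the permanent pasture above), which anchors the "no erosion" baseline.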

  19. A Data Analysis Technique to Estimate the Thermal Characteristics of a House

    Directory of Open Access Journals (Sweden)

    Seyed Amin Tabatabaei

    2017-09-01

    Almost one third of the energy is used in the residential sector, and space heating is the largest part of energy consumption in our houses. Knowledge about the thermal characteristics of a house can increase the awareness of homeowners about the options to save energy, for example by showing that there is room for improvement of the insulation level. However, calculating the exact value of these characteristics is not possible without precise thermal experiments. In this paper, we propose a method to automatically estimate two of the most important thermal characteristics of a house, i.e., the loss rate and the heat capacity, based on collected data about the temperature and gas usage. The method is evaluated with a data set that has been collected in a real-life case study. Although a ground truth is lacking, the analyses show that there is evidence that this method could provide a feasible way to estimate those values from the thermostat data. More detailed data about the houses in which the data was collected is required to draw stronger conclusions. We conclude that the proposed method is a promising way to add energy saving advice to smart thermostats.
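    A hedged sketch of the kind of estimation described above: if the house is treated as a single thermal mass, C dT/dt = P − H(T_in − T_out), then the loss rate H and heat capacity C can be recovered from time series by least squares (the paper's actual procedure may differ):

```python
import numpy as np

def estimate_loss_and_capacity(t_in, t_out, heat_w, dt_s):
    """Fit the lumped model C*dT/dt = P - H*(T_in - T_out) by least squares.
    Returns (H in W/K, C in J/K). A sketch, not the paper's exact method."""
    t_in, t_out, p = (np.asarray(a, float) for a in (t_in, t_out, heat_w))
    dTdt = np.diff(t_in) / dt_s
    X = np.column_stack([p[:-1], -(t_in - t_out)[:-1]])
    (inv_c, h_over_c), *_ = np.linalg.lstsq(X, dTdt, rcond=None)
    return h_over_c / inv_c, 1.0 / inv_c

# Synthetic data generated from known H = 150 W/K, C = 1e7 J/K.
H, C, dt = 150.0, 1.0e7, 600.0
t_out = np.full(200, 5.0)
p = np.where(np.arange(200) % 20 < 10, 3000.0, 0.0)   # heating on/off cycle
t_in = np.empty(200)
t_in[0] = 18.0
for k in range(199):
    t_in[k + 1] = t_in[k] + dt * (p[k] - H * (t_in[k] - t_out[k])) / C
h_est, c_est = estimate_loss_and_capacity(t_in, t_out, p, dt)
```

On this noise-free synthetic series the fit recovers H and C essentially exactly; real thermostat and gas-meter data would add noise and require converting gas usage to heating power first.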

  20. Estimating Horizontal Displacement between DEMs by Means of Particle Image Velocimetry Techniques

    Directory of Open Access Journals (Sweden)

    Juan F. Reinoso

    2015-12-01

    To date, digital terrain model (DTM) accuracy has been studied almost exclusively by computing its height variable. However, the largely ignored horizontal component bears a great influence on the positional accuracy of certain linear features, e.g., hydrological features. In an effort to fill this gap, we propose a means of measurement different from the geomatic approach, drawn from fluid mechanics (water and air flows or aerodynamics). The particle image velocimetry (PIV) algorithm is proposed as an estimator of horizontal differences between digital elevation models (DEMs) in grid format. After applying a scale factor to the displacement estimated by the PIV algorithm, the mean error predicted is around one-seventh of the cell size of the DEM with the greatest spatial resolution, and around one-nineteenth of the cell size of the DEM with the least spatial resolution. Our methodology allows all kinds of DTMs to be compared once they are transformed into DEM format, while also allowing comparison of data from diverse capture methods, i.e., LiDAR versus photogrammetric data sources.
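    The core of a PIV-style displacement estimate is locating the cross-correlation peak between two grids; a minimal sketch using FFT phase correlation on a synthetic, integer-shifted DEM (subcell accuracy, as reported above, would additionally require subpixel peak interpolation):

```python
import numpy as np

def phase_correlation_shift(a: np.ndarray, b: np.ndarray):
    """Estimate the integer (row, col) shift of a relative to b by phase
    correlation, the FFT core of PIV-style displacement estimation."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak position to signed shifts.
    return tuple(int(d) if d <= n // 2 else int(d) - n
                 for d, n in zip(idx, corr.shape))

# A synthetic "DEM" shifted by (3, -2) cells is recovered exactly.
rng = np.random.default_rng(1)
dem = rng.normal(size=(64, 64))
shifted = np.roll(np.roll(dem, 3, axis=0), -2, axis=1)
print(phase_correlation_shift(shifted, dem))  # (3, -2)
```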

  1. Estimates of the non-market value of sea turtles in Tobago using stated preference techniques.

    Science.gov (United States)

    Cazabon-Mannette, Michelle; Schuhmann, Peter W; Hailey, Adrian; Horrocks, Julia

    2017-05-01

    Economic benefits are derived from sea turtle tourism all over the world. Sea turtles also add value to underwater recreation and convey non-use values. This study examines the non-market value of sea turtles in Tobago. We use a choice experiment to estimate the value of sea turtle encounters to recreational SCUBA divers and the contingent valuation method to estimate the value of sea turtles to international tourists. Results indicate that turtle encounters were the most important dive attribute among those examined. Divers are willing to pay over US$62 per two tank dive for the first turtle encounter. The mean WTP for turtle conservation among international visitors to Tobago was US$31.13 which reflects a significant non-use value associated with actions targeted at keeping sea turtles from going extinct. These results illustrate significant non-use and non-consumptive use value of sea turtles, and highlight the importance of sea turtle conservation efforts in Tobago and throughout the Caribbean region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. The application of digital imaging techniques in the in vivo estimation of the body composition of pigs: a review

    International Nuclear Information System (INIS)

    Szabo, C.; Babinszky, L.; Verstegen, M.W.A.; Vangen, O.; Jansman, A.J.M.; Kanis, E.

    1999-01-01

    Calorimetry and comparative slaughter measurement are techniques widely used to measure the chemical body composition of pigs, while dissection is the standard method to determine the physical (tissue) composition of the body. The disadvantage of calorimetry is the small number of observations possible, while comparative slaughter and dissection can be performed only once on the same pig. Non-invasive imaging techniques, such as real-time ultrasound, computer tomography (CT) and magnetic resonance imaging (MRI), could constitute a valuable tool for serial estimation of body composition in living animals. The aim of this paper was to compare these methods. Ultrasound equipment entails a relatively low cost and great mobility, but provides less information and lower accuracy about whole-body composition compared to CT and MRI. For this reason the ultrasound technique will most probably remain a field technique in the future. Computer tomography and MRI, with standardized and verified application methods, could provide a substitute for whole-body analysis and physical dissection. The disadvantages of CT and MRI are their expense and lack of portability, so these techniques will most likely be applied only in research and breeding programs

  3. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or with the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of the principal moments of inertia within ±10% and to changes in the orientation of the principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors of up to 10% in the magnitude of the principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when combined with errors of 10° in the orientation of the principal axes of inertia. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. The orientation of the principal axes of inertia should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
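    The simulation side of the proposed criterion can be sketched by integrating Euler's torque-free equations with a candidate inertia tensor and scoring deviation angles against reference angular velocities (the values below are illustrative; the study's full pipeline also includes the motion-capture processing):

```python
import numpy as np

def simulate_body_omega(I: np.ndarray, omega0, t_end, dt):
    """Integrate Euler's torque-free equations I*dw/dt = -(w x I*w) with RK4,
    returning body-frame angular velocity samples."""
    def f(w):
        return np.linalg.solve(I, -np.cross(w, I @ w))
    w = np.asarray(omega0, float)
    out = [w.copy()]
    for _ in range(int(t_end / dt)):
        k1 = f(w); k2 = f(w + 0.5 * dt * k1)
        k3 = f(w + 0.5 * dt * k2); k4 = f(w + dt * k3)
        w = w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(w.copy())
    return np.array(out)

def deviation_angle_deg(w_sim, w_ref):
    """Per-sample angle between simulated and reference angular velocities."""
    cosang = np.sum(w_sim * w_ref, axis=1) / (
        np.linalg.norm(w_sim, axis=1) * np.linalg.norm(w_ref, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Axisymmetric cylinder (illustrative values): spin about the symmetry axis
# stays constant while the transverse component precesses.
I_cyl = np.diag([0.02, 0.02, 0.004])     # kg*m^2
omega = simulate_body_omega(I_cyl, [1.0, 0.5, 10.0], t_end=2.0, dt=0.001)
```

Repeating the simulation with perturbed principal moments and axis orientations, then summarizing deviation_angle_deg as an RMS value, mirrors the Monte Carlo sensitivity analysis described above.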

  4. A Simple Technique to Estimate the Flammability Index of Moroccan Forest Fuels

    Directory of Open Access Journals (Sweden)

    M'Hamed Hachmi

    2011-01-01

    A formula to estimate a forest fuel flammability index (FI) is proposed, integrating three species flammability parameters: time to ignition, time of combustion, and flame height. Thirty-one (31) Moroccan tree and shrub species were tested within a wide range of fuel moisture contents. Six species flammability classes were identified. An ANOVA of the FI-values was performed and analyzed using four different sample sizes of 12, 24, 36, and 50 flammability tests. Fuel humidity content is inversely correlated with the FI-value, and the linear model appears to be the most adequate equation for predicting the hypothetical threshold point, the humidity of extinction. Most of the Moroccan forest fuels studied are classified as moderately flammable to flammable species based on their average humidity content, calculated for the summer period from July to September.
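    Under the inverse linear FI-humidity relation reported above, the humidity of extinction can be read off as the x-intercept of a linear fit; a sketch with illustrative data (not the study's measurements or its FI formula):

```python
import numpy as np

# FI measured at several fuel moisture contents (illustrative values); the
# linear fit's x-intercept estimates the humidity of extinction, i.e. the
# moisture content at which the flammability index drops to zero.
moisture_pct = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
fi = np.array([8.1, 6.2, 3.9, 2.1, 0.2])

slope, intercept = np.polyfit(moisture_pct, fi, 1)
humidity_extinction = -intercept / slope   # FI = 0 crossing, in % moisture
```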

  5. Estimation of sea level variations with GPS/GLONASS-reflectometry technique

    Science.gov (United States)

    Padokhin, A. M.; Kurbatov, G. A.; Andreeva, E. S.; Nesterov, I. A.; Nazarenko, M. O.; Berbeneva, N. A.; Karlysheva, A. V.

    2017-11-01

    In the present paper we study GNSS-reflectometry methods for estimating sea level variations using a single GNSS receiver; these methods are based on the multipath propagation effects caused by the reflection of navigation signals from the sea surface. Such multipath propagation produces an interference pattern in the signal-to-noise ratio (SNR) of GNSS signals at small satellite elevation angles, whose parameters are determined by the wavelength of the navigation signal and the height of the antenna phase center above the reflecting sea surface. In the current work we used GPS and GLONASS signals and measurements at the two working frequencies of both systems to study sea level variations, which almost doubles the number of observations compared to a GPS-only tide gauge. For the UNAVCO sc02 station and the collocated Friday Harbor NOAA tide gauge we show good agreement between GNSS-reflectometry and traditional mareograph sea level data.
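    The SNR interference pattern described above oscillates versus the sine of the elevation angle at a rate proportional to the reflector height; recovering that dominant frequency (here with a Lomb-Scargle periodogram on synthetic data, a common choice since sin(elevation) is unevenly sampled) yields the height of the antenna phase center above the water:

```python
import numpy as np
from scipy.signal import lombscargle

# Detrended SNR varies as cos(4*pi*h/lambda * sin(elevation)), so the
# spectral peak over candidate heights h recovers the reflector height.
LAMBDA_L1 = 0.1903                  # GPS L1 carrier wavelength, m
h_true = 6.0                        # synthetic reflector height, m

elev = np.radians(np.linspace(5.0, 25.0, 500))
x = np.sin(elev)
snr = np.cos(4.0 * np.pi * h_true / LAMBDA_L1 * x)   # modelled multipath term
snr -= snr.mean()

heights = np.linspace(1.0, 12.0, 2000)               # candidate heights, m
power = lombscargle(x, snr, 4.0 * np.pi * heights / LAMBDA_L1)
h_est = heights[np.argmax(power)]
```

Sea level variations then appear as changes in the retrieved height over time; the true processing additionally detrends the raw SNR and screens satellite tracks.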

  6. Comparative methane estimation from cattle based on total CO2 production using different techniques

    Directory of Open Access Journals (Sweden)

    Md N. Haque

    2017-06-01

    The objective of this study was to compare the precision of CH4 estimates using CO2 calculated from heat production (HP) by the CO2 method (CO2T) and CO2 measured in the respiration chamber (CO2R). The CO2R and CO2T study was conducted as a 3 × 3 Latin square design in which 3 Dexter heifers were allocated to metabolic cages for 3 periods. Each period consisted of 2 weeks of adaptation followed by 1 week of measurement with the CO2R and CO2T. The average body weight of the heifers was 226 ± 11 kg (mean ± SD). They were fed a total mixed ration, twice daily, with 1 of 3 supplements: wheat (W), molasses (M), or molasses mixed with sodium bicarbonate (Mbic). The dry matter intake (DMI; kg/day) was significantly greater (P < 0.001) in the metabolic cage than in the respiration chamber. The daily CH4 emission (L/day) was strongly correlated (r = 0.78) between CO2T and CO2R. The daily CH4 emission (L/kg DMI) by the CO2T was of the same magnitude as by the CO2R. The CO2 production (L/day) measured in the respiration chamber was not different (P = 0.39) from the CO2 production calculated using the CO2T. These results indicate reasonable accuracy and precision of CH4 estimation by the CO2T compared with the CO2R.

  7. Estimation of transcapillary transport of palmitate by the multiple indicator dilution technique

    International Nuclear Information System (INIS)

    Little, S.E.; van der Vusse, G.J.; Bassingthwaighte, J.B.

    1986-01-01

    From the outflow concentration-time curves for 14 C-palmitate, intravascular ( 131 I-albumin) and extracellular ( 3 H-sucrose) tracers, palmitate extraction was estimated in rabbit hearts Langendorff-perfused at constant flow with nonrecirculated palmitate-albumin Krebs-Ringer buffer. Contamination of 131 I-albumin with free 131 I - (typically 1%) or aggregated albumin (typically 0.1 to 0.5%) greatly alters the shapes of the tails of the curves after 2 albumin transit times, vitiating accurate estimation of cellular permeability or reactions. Buffers were prepared by adding K + -palmitate (made using K 2 CO 3 ) to albumin solutions. The final concentrations (after dialysing twice and filtering through a 1.2 μm filter) of K + , HCO 3 and CO 3 were 5.0 mM, 23.5 mM and 0.5 mM respectively; pH was between 7.35 and 7.40 for several hours. The bolus of tracers was prepared by mixing 131 I-albumin (dialysed to remove I - , and filtered through a 0.2 μm filter to remove aggregates), K + [U- 14 C]palmitate (high specific activity) and 3 H-sucrose. Before injection the radioactive bolus was preequilibrated with the perfusate at a bolus:perfusate ratio of 1:10. Glacial acetic acid was added to the outflow samples to remove the 14 CO 2 which, if present in the sample, would be interpreted as increased palmitate back diffusion. The peak extractions of palmitate were about 40% at perfusate palmitate concentrations of 0.02 to 1.0 mM, 0.4 mM albumin, at a flow of 5 ml·g -1 ·min -1 , showing the capillary permeability-surface area product to be roughly constant. This suggests either that transcapillary palmitate transport is passive or that a transporter interacts with the albumin-palmitate complex

  8. Software Development for Estimating the Conversion Factor (K-Factor) at Suitable Scan Areas, Relating the Dose Length Product to the Effective Dose.

    Science.gov (United States)

    Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Syouichi; Koshida, Kichiro; Matsunaga, Yuta; Kawaguchi, Ai; Haba, Tomonobu; Toyama, Hiroshi; Kato, Ryouichi

    2017-05-01

    We developed a k-factor-creator software (kFC) that provides the k-factor for CT examination in an arbitrary scan area. It provides the k-factor from the effective dose and dose-length product by Imaging Performance Assessment of CT scanners and CT-EXPO. To assess the reliability, we compared the kFC-evaluated k-factors with those of the International Commission on Radiological Protection (ICRP) publication 102. To confirm the utility, the effective dose determined by coronary computed tomographic angiography (CCTA) was evaluated by a phantom study and k-factor studies. In the CCTA, the effective doses were 5.28 mSv in the phantom study, 2.57 mSv (51%) in the k-factor of ICRP, and 5.26 mSv (1%) in the k-factor of the kFC. Effective doses can be determined from the kFC-evaluated k-factors in suitable scan areas. Therefore, we speculate that the flexible k-factor is useful in clinical practice, because CT examinations are performed in various scan regions.
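
    The dose conversion the software automates is the product E = k × DLP. A minimal sketch; the k-factor below is the commonly tabulated adult chest value, quoted here only as an illustration:

    ```python
    def effective_dose(dlp_mgy_cm, k_factor):
        """Effective dose (mSv) from the dose-length product (mGy·cm)
        via E = k * DLP, where k depends on the scanned body region."""
        return k_factor * dlp_mgy_cm

    # k ≈ 0.014 mSv/(mGy·cm) is widely quoted for an adult chest scan
    dose = effective_dose(500.0, 0.014)
    print(round(dose, 2))  # → 7.0 (mSv)
    ```

    The point of the kFC software is precisely that a single tabulated k is inadequate when the scan range does not match a standard region, so a region-specific k must be generated.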

  9. Software development for estimating the conversion factor (k-factor) at suitable scan areas, relating the dose length product to the effective dose

    International Nuclear Information System (INIS)

    Kobayashi, Masanao; Asada, Yasuki; Suzuki, Syouichi; Kato, Ryouichi; Matsubara, Kosuke; Koshida, Kichiro; Matsunaga, Yuta; Kawaguchi, Ai; Haba, Tomonobu; Toyama, Hiroshi

    2017-01-01

    We developed a k-factor-creator software (kFC) that provides the k-factor for CT examination in an arbitrary scan area. It provides the k-factor from the effective dose and dose-length product by Imaging Performance Assessment of CT scanners and CT-EXPO. To assess the reliability, we compared the kFC-evaluated k-factors with those of the International Commission on Radiological Protection (ICRP) publication 102. To confirm the utility, the effective dose determined by coronary computed tomographic angiography (CCTA) was evaluated by a phantom study and k-factor studies. In the CCTA, the effective doses were 5.28 mSv in the phantom study, 2.57 mSv (51%) in the k-factor of ICRP, and 5.26 mSv (1%) in the k-factor of the kFC. Effective doses can be determined from the kFC-evaluated k-factors in suitable scan areas. Therefore, we speculate that the flexible k-factor is useful in clinical practice, because CT examinations are performed in various scan regions. (authors)

  10. Estimation of trace levels of plutonium in urine samples by fission track technique

    International Nuclear Information System (INIS)

    Sawant, P.D.; Prabhu, S.; Pendharkar, K.A.; Kalsi, P.C.

    2009-01-01

    Individual monitoring of radiation workers handling Pu in various nuclear installations requires the detection of trace levels of plutonium in bioassay samples. It is necessary to develop methods that can detect urinary excretion of Pu at levels of a fraction of a mBq. Therefore, a sensitive method, fission track analysis, has been developed for the measurement of trace levels of Pu in bioassay samples. In this technique, chemically separated plutonium from the sample and a Pu standard were electrodeposited on planchettes, covered with Lexan solid state nuclear track detector (SSNTD) and irradiated with thermal neutrons in the APSARA reactor of Bhabha Atomic Research Centre, India. The fission track densities in the Lexan films of the sample and the standard were used to calculate the amount of Pu in the sample. The minimum amount of Pu that can be analyzed by this method using doubly distilled electronic grade (E.G.) reagents is about 12 μBq/L. (author)
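
    The comparison step described above reduces to a ratio of fission-track counts between a sample and a standard irradiated together. A minimal sketch with invented numbers:

    ```python
    def pu_by_comparison(tracks_sample, tracks_std, pu_std):
        """Fission-track comparison technique: with identical irradiation,
        etching and counting conditions, the Pu amount scales linearly
        with the observed fission-track density."""
        return pu_std * tracks_sample / tracks_std

    # e.g. 150 tracks in the sample film vs 600 for a 1000-uBq standard
    pu = pu_by_comparison(150, 600, 1000.0)
    print(pu)  # → 250.0 (uBq)
    ```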

  11. Use of adsorption and gas chromatographic techniques in estimating biodegradation of indigenous crude oils

    International Nuclear Information System (INIS)

    Kokub, D.; Allahi, A.; Shafeeq, M.; Khalid, Z.M.; Malik, K.A.; Hussain, A.

    1993-01-01

    Indigenous crude oils could be degraded and emulsified to varying degrees by locally isolated bacteria. Degradation and emulsification were found to depend on the chemical composition of the crude oils. Tando Alum and Khashkheli crude oils were emulsified in 27 and 33 days of incubation respectively, while Joyamair crude oil did not emulsify, mainly due to its high viscosity. Using an adsorption chromatographic technique, oil from control (uninoculated) and biodegraded flasks was fractionated into deasphaltened oil (containing saturate, aromatic and NSO (nitrogen, sulphur, oxygen) hydrocarbons) and soluble asphaltenes. Saturate fractions from control and degraded oil were further analysed by gas liquid chromatography. From these analyses, it was observed that the saturate fraction was preferentially utilized, and crude oils with greater contents of the saturate fraction were better emulsified than those low in this fraction. Utilization of the various fractions of crude oils was in the order saturate > aromatic > NSO. (author)

  12. A first look at roadheader construction and estimating techniques for site characterization at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Neil, D.M.; Taylor, D.L.

    1991-01-01

    The Yucca Mountain site characterization program will be based on mechanical excavation techniques for the mined repository construction and development. Tunnel Boring Machines (TBMs), Mobile Miners (MM), Raiseborers (RB), Blind Hole Shaft Boring Machines (BHSB), and Roadheaders (RH) have been selected as the mechanical excavation machines best suited to mine the densely welded and non-welded tuffs of the Topopah Springs and Calico Hills members. Heavy-duty RH in the 70 to 100 ton class with 300 kW cutter motors have been evaluated, and formulas developed to predict machine performance based on the rock physical properties and the results of Linear Cutting Machine (LCM) tests done at the Colorado School of Mines (CSM) for Sandia National Labs. (SNL)

  13. Photometric estimation of plutonium in product solutions and acid waste solutions using flow injection analysis technique

    International Nuclear Information System (INIS)

    Dhas, A.J.A.; Dharmapurikar, G.R.; Kumaraguru, K.; Vijayan, K.; Kapoor, S.C.; Ramanujam, A.

    1995-01-01

    The flow injection analysis technique is employed for the measurement of plutonium concentrations in product nitrate solutions by measuring the absorbance of Pu(III) at 565 nm and of Pu(IV) at 470 nm, using a Metrohm 662 photometer, with a pyrex glass tube of 2 mm (ID) inserted in the light path of the detector serving as a flow cell. The photometer detector never comes in contact with the radioactive solution. In the case of acid waste solutions, Pu is first purified by extraction chromatography with 2-ethyl hexyl hydrogen 2-ethyl hexyl phosphonate (KSM 17)-chromosorb, and the Pu in the eluate is complexed with Arsenazo III followed by measurement of the absorbance at 665 nm. Absorbances of reference solutions in the desired concentration ranges are measured to calibrate the system. The results obtained agree with the reference values within ±2.0%. (author). 3 refs., 1 tab

  14. The Technique for the Numerical Tolerances Estimations in the Construction of Compensated Accelerating Structures

    CERN Document Server

    Paramonov, V V

    2004-01-01

    The requirements on cell manufacturing precision and tuning in the construction of multi-cell accelerating structures come from the required accelerating field uniformity, based on beam dynamics demands. The standard deviation of the field distribution depends on the deviations of the accelerating and coupling mode frequencies, the stop-band width and the coupling coefficient. These deviations can be determined from the 3D field distributions for the accelerating and coupling modes and the cell surface displacements. With modern software this can be done separately for every specified part of the cell surface. Finally, the cell surface displacements are related to the deviations of the cell dimensions. This technique allows one both to identify qualitatively the critical regions and to optimize quantitatively the definition of tolerances.

  15. Aspergillus specific IgE estimation by radioallergosorbent technique (RAST) in obstructive airways disease at Agra

    International Nuclear Information System (INIS)

    Sharma, S.K.; Singh, R.; Mehrotra, M.P.; Patney, N.L.; Sachan, A.S.; Shiromany, A.

    1986-01-01

    The radioallergosorbent technique (RAST) was used to measure the levels of Aspergillus specific IgE in 25 normal controls, 25 cases of extrinsic bronchial asthma and 25 cases of allergic broncho-pulmonary aspergillosis with a view to study the clinical role and its correlation with sputum culture, skin sensitivity and severity of airways obstruction. The test was performed using Pharmacia diagnostic kits with antigen derived from Aspergillus fumigatus. Abnormal levels of Aspergillus specific IgE were observed in 84 per cent cases of bronchial asthma but none of the controls. 86.7 per cent of all cases with positive skin test had positive radioallergosorbent test and there was no false positive reaction. There was a positive correlation of Aspergillus specific IgE with skin test positivity and with FEV 1 /FVC per cent. (author)

  16. Comparison of internal radiation doses estimated by MIRD and voxel techniques for a ''family'' of phantoms

    International Nuclear Information System (INIS)

    Smith, T.

    2000-01-01

    The aim of this study was to use a new system of realistic voxel phantoms, based on computed tomography scanning of humans, to assess its ability to specify the internal dosimetry of selected human examples in comparison with the well-established MIRD system of mathematical anthropomorphic phantoms. Differences in specific absorbed fractions between the two systems were inferred by using organ dose estimates as the end point for comparison. A ''family'' of voxel phantoms, comprising an 8-week-old baby, a 7-year-old child and a 38-year-old adult, was used and a close match to these was made by interpolating between organ doses estimated for pairs of the series of six MIRD phantoms. Using both systems, doses were calculated for up to 22 organs for four radiopharmaceuticals with widely differing biodistribution and emission characteristics (technetium-99m pertechnetate, administered without thyroid blocking; iodine-123 iodide; indium-111 antimyosin; oxygen-15 water). Organ dose estimates under the MIRD system were derived using the software MIRDOSE 3, which incorporates specific absorbed fraction (SAF) values for the MIRD phantom series. The voxel system uses software based on the same dose calculation formula in conjunction with SAF values determined by Monte Carlo analysis at the GSF of the three voxel phantoms. Effective doses were also compared. Substantial differences in organ weights were observed between the two systems, 18% differing by more than a factor of 2. Out of a total of 238 organ dose comparisons, 5% differed by more than a factor of 2 between the systems; these included some doses to walls of the GI tract, a significant result in relation to their high tissue weighting factors. Some of the largest differences in dose were associated with organs of lower significance in terms of radiosensitivity (e.g. thymus). 
In this small series, voxel organ doses tended to exceed MIRD values, on average, and a 10% difference was significant when all 238 organ doses

  17. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding graphical user interface (GUI) to become user friendly tools for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. 
A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average

  18. The advantages, and challenges, in using multiple techniques in the estimation of surface water-groundwater fluxes.

    Science.gov (United States)

    Shanafield, M.; Cook, P. G.

    2014-12-01

    When estimating surface water-groundwater fluxes, the use of complementary techniques helps to fill in uncertainties in any individual method, and to potentially gain a better understanding of spatial and temporal variability in a system. It can also be a way of preventing the loss of data during infrequent and unpredictable flow events. For example, much of arid Australia relies on groundwater, which is recharged by streamflow through ephemeral streams during flood events. Three recent surface water/groundwater investigations from arid Australian systems provide good examples of how using multiple field and analysis techniques can help to more fully characterize surface water-groundwater fluxes, but can also yield conflicting values over varying spatial and temporal scales. In the Pilbara region of Western Australia, combining streambed radon measurements, vertical heat transport modeling, and a tracer test helped constrain very low streambed residence times, which are on the order of minutes. Spatial and temporal variability between the methods yielded hyporheic exchange estimates between 10^-4 m^2 s^-1 and 4.2 x 10^-2 m^2 s^-1. In South Australia, three-dimensional heat transport modeling captured heterogeneity within 20 square meters of streambed, identifying areas of sandy soil (flux rates of up to 3 m d^-1) and clay (flux rates too slow to be accurately characterized). Streamflow front modeling showed similar flux rates, but averaged over 100 m long stream segments for a 1.6 km reach. Finally, in central Australia, several methods were used to decipher whether any of the flow down a highly ephemeral river contributes to regional groundwater recharge, showing that evaporation and evapotranspiration likely account for all of the infiltration into the perched aquifer. Lessons learned from these examples demonstrate the influence of spatial and temporal variability between techniques on estimated fluxes.

  19. The importance of record length in estimating the magnitude of climatic changes: an example using 175 years of lake ice-out dates in New England

    Science.gov (United States)

    Hodgkins, Glenn A.

    2013-01-01

    Many studies have shown that lake ice-out (break-up) dates in the Northern Hemisphere are useful indicators of late winter/early spring climate change. Trends in lake ice-out dates in New England, USA, were analyzed for 25, 50, 75, 100, 125, 150, and 175 year periods ending in 2008. More than 100 years of ice-out data were available for 19 of the 28 lakes in this study. The magnitude of trends over time depends on the length of the period considered. For the recent 25-year period, there was a mix of earlier and later ice-out dates. Lake ice-outs during the last 50 years became earlier by 1.8 days/decade (median change for all lakes with adequate data). This is a much higher rate than for longer historical periods; ice-outs became earlier by 0.6 days/decade during the last 75 years, 0.4 days/decade during the last 100 years, and 0.6 days/decade during the last 125 years. The significance of trends was assessed under the assumption of serial independence of historical ice-out dates and under the assumption of short- and long-term persistence. Hypolimnion dissolved oxygen (DO) levels are an important factor in lake eutrophication and coldwater fish survival. Based on historical data available at three lakes, 32 to 46% of the interannual variability of late summer hypolimnion DO levels was related to ice-out dates; earlier ice-outs were associated with lower DO levels.
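
    The record-length effect can be illustrated by fitting a trend over different trailing windows of the same series. The sketch below uses a synthetic ice-out series, not the New England data:

    ```python
    import numpy as np

    def trend_days_per_decade(years, ice_out_doy, window):
        """Least-squares trend (days/decade) of ice-out day-of-year over
        the most recent `window` years of the record."""
        y = np.asarray(years, float)
        d = np.asarray(ice_out_doy, float)
        mask = y >= y.max() - window + 1
        slope = np.polyfit(y[mask], d[mask], 1)[0]   # days per year
        return slope * 10.0

    # Synthetic 100-year series with ice-out advancing 0.15 day/yr
    years = np.arange(1909, 2009)
    doy = 110.0 - 0.15 * (years - 1909)
    print(round(trend_days_per_decade(years, doy, 50), 1))  # → -1.5
    ```

    On real data the fitted slope would differ between windows, which is exactly the sensitivity to record length the abstract documents.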

  20. Using Convective Stratiform Technique (CST) method to estimate rainfall (case study in Bali, December 14th 2016)

    Science.gov (United States)

    Vista Wulandari, Ayu; Rizki Pratama, Khafid; Ismail, Prayoga

    2018-05-01

    Accurate and realtime data in wide spatial space at this time is still a problem because of the unavailability of observation of rainfall in each region. Weather satellites have a very wide range of observations and can be used to determine rainfall variability with better resolution compared with a limited direct observation. Utilization of Himawari-8 satellite data in estimating rainfall using Convective Stratiform Technique (CST) method. The CST method is performed by separating convective and stratiform cloud components using infrared channel satellite data. Cloud components are classified by slope because the physical and dynamic growth processes are very different. This research was conducted in Bali area on December 14, 2016 by verifying the result of CST process with rainfall data from Ngurah Rai Meteorology Station Bali. It is found that CST method result had simililar value with data observation in Ngurah Rai meteorological station, so it assumed that CST method can be used for rainfall estimation in Bali region.

  1. Application of fission track technique for estimation of uranium concentration in drinking waters of Punjab

    International Nuclear Information System (INIS)

    Prabhu, S.P.; Raj, Sanu S.; Sawant, P.D.; Kumar, Ajay; Sarkar, P.K.; Tripathi, R.M.

    2010-01-01

    Full text: Drinking water samples were collected from four different districts of Punjab, namely Bhatinda, Mansa, Faridkot and Firozpur, for ascertaining the U(nat.) concentrations. The samples were collected from bore wells, hand pumps, tube wells and treated municipal water supply. All the samples collected (235 nos.) were preserved and processed following the international standard protocol and analyzed by laser fluorimetry. Results of the analysis by laser fluorimetry have already been reported. To ensure the accuracy of the data obtained by laser fluorimetry, a few samples (10 nos.) from each district were also analyzed by alpha spectrometry as well as by the fission track analysis (FTA) technique. FTA in solution media for uranium had already been standardized in the Bioassay Laboratory of the Health Physics Division. A portion of each drinking water sample was directly transferred to a polythene tube sealed at one end. A Lexan detector with a proper identification mark was immersed in the sample, and the other, open end of the tube was also heat-sealed. Two tubes containing samples and one containing a uranium standard (80 ppb) were irradiated in the Pneumatic Carrier Facility (PCF) of the DHRUVA reactor. The Lexan detectors were then chemically etched and tracks were counted under an optical microscope at 400X magnification. The concentration of uranium in each sample was determined by the comparison technique. Quality assurance was carried out by replicate analysis and by analysis of standard reference materials. Uranium concentration in these samples ranged from 3.2 to 60.5 ppb with an average of 28.8 ppb. A t-test analysis for paired data was done to compare the results obtained by FTA with those obtained by laser fluorimetry. The calculated value of t is -1.19, which lies within the tabulated critical value for 40 observations (±2.02 at the 95% confidence level). This shows that the results of the measurements carried out by FTA and laser fluorimetry are not significantly different. The preliminary studies
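
    The paired t-test used to compare the two methods can be computed directly. The sketch below uses hypothetical paired measurements, not the Punjab data:

    ```python
    import math

    def paired_t(x, y):
        """Paired t statistic: mean of the pairwise differences divided
        by its standard error.  |t| below the tabulated critical value
        means the two methods are not significantly different."""
        d = [a - b for a, b in zip(x, y)]
        n = len(d)
        mean = sum(d) / n
        var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
        return mean / math.sqrt(var / n)

    # Hypothetical paired results (ppb) from two methods on the same samples
    t = paired_t([10.0, 12.0, 14.0], [9.0, 12.0, 13.0])
    print(round(t, 6))  # → 2.0
    ```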

  2. Accuracy in estimation of timber assortments and stem distribution - A comparison of airborne and terrestrial laser scanning techniques

    Science.gov (United States)

    Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2014-11-01

    Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
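
    The accuracy metric quoted above, a relative root-mean-squared error expressed as a percentage of the reference mean, is straightforward to compute. The values below are invented for illustration:

    ```python
    import math

    def relative_rmse_percent(estimates, reference):
        """RMSE of the estimates expressed as a percentage of the mean
        of the reference (e.g. harvester-measured) values."""
        n = len(estimates)
        rmse = math.sqrt(sum((e - r) ** 2
                             for e, r in zip(estimates, reference)) / n)
        return 100.0 * rmse / (sum(reference) / n)

    # Hypothetical tree-level volume estimates vs reference (m^3)
    print(round(relative_rmse_percent([0.9, 1.2, 1.05],
                                      [1.0, 1.0, 1.0]), 1))  # → 13.2
    ```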

  3. Optimal Design for Reactivity Ratio Estimation: A Comparison of Techniques for AMPS/Acrylamide and AMPS/Acrylic Acid Copolymerizations

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2015-11-01

    Full Text Available Water-soluble polymers of acrylamide (AAm) and acrylic acid (AAc) have significant potential in enhanced oil recovery, as well as in other specialty applications. To improve the shear strength of the polymer, a third comonomer, 2-acrylamido-2-methylpropane sulfonic acid (AMPS), can be added to the pre-polymerization mixture. Copolymerization kinetics of AAm/AAc are well studied, but little is known about the other comonomer pairs (AMPS/AAm and AMPS/AAc). Hence, reactivity ratios for AMPS/AAm and AMPS/AAc copolymerization must be established first. A key aspect in the estimation of reliable reactivity ratios is design of experiments, which minimizes the number of experiments and provides increased information content (resulting in more precise parameter estimates). However, design of experiments is hardly ever used during copolymerization parameter estimation schemes. In the current work, copolymerization experiments for both AMPS/AAm and AMPS/AAc are designed using two optimal techniques (Tidwell-Mortimer and the error-in-variables-model (EVM)). From these optimally designed experiments, accurate reactivity ratio estimates are determined for AMPS/AAm (rAMPS = 0.18, rAAm = 0.85) and AMPS/AAc (rAMPS = 0.19, rAAc = 0.86).
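
    Once reactivity ratios are estimated, the Mayo-Lewis equation gives the instantaneous copolymer composition for any monomer feed; it is the model underlying such estimation schemes. A sketch using the reported AMPS/AAm estimates:

    ```python
    def mayo_lewis_F1(f1, r1, r2):
        """Mayo-Lewis equation: instantaneous mole fraction F1 of
        monomer 1 incorporated into the copolymer, given its feed
        fraction f1 and the reactivity ratios r1, r2."""
        f2 = 1.0 - f1
        num = r1 * f1 ** 2 + f1 * f2
        den = r1 * f1 ** 2 + 2.0 * f1 * f2 + r2 * f2 ** 2
        return num / den

    # Equimolar AMPS/AAm feed with rAMPS = 0.18, rAAm = 0.85
    print(round(mayo_lewis_F1(0.5, 0.18, 0.85), 3))  # → 0.389
    ```

    Both reactivity ratios below 1 mean each radical prefers to add the other monomer, so the copolymer drifts toward alternation relative to the feed.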

  4. A positional estimation technique for an autonomous land vehicle in an unstructured environment

    Science.gov (United States)

    Talluri, Raj; Aggarwal, J. K.

    1990-01-01

    This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information, and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.

  5. Techniques for estimating flood-depth frequency relations for streams in West Virginia

    Science.gov (United States)

    Wiley, J.B.

    1987-01-01

    Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth of 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state, where mean basin elevation also is significant. The equations are applicable to any unregulated site in West Virginia where values of independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps. (Author's abstract)
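
    Regional regression equations of this kind are typically power laws in basin characteristics, fit by least squares in log space. A sketch with synthetic stations; the coefficient and exponent are invented, not the West Virginia values:

    ```python
    import numpy as np

    def fit_power_law(drainage_area, flood_depth):
        """Fit depth = a * area^b by ordinary least squares on log10
        values, the usual form of regional regression equations."""
        b, log_a = np.polyfit(np.log10(drainage_area),
                              np.log10(flood_depth), 1)
        return 10.0 ** log_a, b

    # Synthetic stations following depth = 2 * area^0.3 exactly
    area = np.array([10.0, 50.0, 120.0, 400.0, 900.0])
    depth = 2.0 * area ** 0.3
    a, b = fit_power_law(area, depth)
    print(round(a, 2), round(b, 2))  # → 2.0 0.3
    ```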

  6. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN-static neural network; NARX network-dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest.

  7. Biological Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for joint estimation of the amplitude and direction of arrival of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and targets are supposed to be in the far field of the array. Formulation of the fitness function is based on mean square error, and this function requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
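
    A minimal PSO of the kind described, jointly estimating the DOA and amplitude of a single far-field source on a uniform linear array by minimizing a mean-square-error fitness on one snapshot, can be sketched as follows. The inertia and acceleration constants are generic textbook choices and the scenario is invented, not the authors' settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def steering(theta_deg, n_elem):
        """ULA steering vector with half-wavelength element spacing."""
        phase = np.pi * np.sin(np.radians(theta_deg))   # 2*pi*d/lambda = pi
        return np.exp(1j * phase * np.arange(n_elem))

    def fitness(params, snapshot):
        """Mean-square error between the snapshot and the model signal."""
        theta, amp = params
        model = amp * steering(theta, snapshot.size)
        return np.mean(np.abs(snapshot - model) ** 2)

    def pso(snapshot, n_particles=30, iters=150):
        """Minimal particle swarm over (DOA in degrees, real amplitude)."""
        lo, hi = np.array([-90.0, 0.0]), np.array([90.0, 5.0])
        # Seed DOAs on a grid so the main-lobe basin contains particles
        pos = np.column_stack([np.linspace(-90.0, 90.0, n_particles),
                               rng.uniform(0.0, 5.0, n_particles)])
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pcost = np.array([fitness(p, snapshot) for p in pos])
        gbest = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1 = rng.random((n_particles, 1))
            r2 = rng.random((n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            cost = np.array([fitness(p, snapshot) for p in pos])
            better = cost < pcost
            pbest[better] = pos[better]
            pcost[better] = cost[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest

    # Single far-field target at 20 degrees, amplitude 2.0, 8-element ULA
    snapshot = 2.0 * steering(20.0, 8)
    theta_hat, amp_hat = pso(snapshot)
    print(round(theta_hat, 1), round(amp_hat, 1))
    ```

    The MSE surface has sidelobe local minima in the DOA dimension, which is why a global method such as PSO is attractive here compared with pure gradient descent.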

  8. Different techniques of excess 210Pb for sedimentation rate estimation in the Sarawak and Sabah coastal waters

    International Nuclear Information System (INIS)

    Zal Uyun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abdul Rahim Mohamed

    2010-01-01

    Sediment core samples were collected at eight stations in the Sarawak and Sabah coastal waters using a gravity box corer to estimate sedimentation rates based on the activity of excess 210Pb. The sedimentation rates derived from four mathematical models, CIC, Shukla-CIC, CRS and ADE, were generally in good agreement, with similar or comparable values at all stations. However, statistical analysis using an independent-sample t-test indicated that the Shukla-CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the study area. (author)
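For context, the simplest of these models, CIC (constant initial concentration), obtains the sedimentation rate from the slope of log excess 210Pb activity versus depth. A minimal sketch, using the decay constant implied by the 22.3-year half-life of 210Pb:

```python
import numpy as np

PB210_LAMBDA = np.log(2) / 22.3  # 210Pb decay constant, 1/yr

def cic_sedimentation_rate(depth_cm, excess_pb210):
    """CIC model: A(z) = A0 * exp(-lambda * z / s), so ln A falls
    linearly with depth and the fitted slope gives s = -lambda/slope
    (cm/yr)."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -PB210_LAMBDA / slope
```

The CRS and ADE models differ in how the 210Pb supply and mixing are assumed to behave, but all are fitted to the same kind of activity-depth profile.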

  9. Comparisons and Uncertainty in Fat and Adipose Tissue Estimation Techniques: The Northern Elephant Seal as a Case Study.

    Directory of Open Access Journals (Sweden)

    Lisa K Schwarz

    Full Text Available Fat mass and body condition are important metrics in bioenergetics and physiological studies. They can also link foraging success with demographic rates, making them key components of models that predict population-level outcomes of environmental change. Therefore, it is important to incorporate uncertainty in physiological indicators if results will lead to species management decisions. Maternal fat mass in elephant seals (Mirounga spp.) can predict reproductive rate and pup survival, but no one has quantified or identified the sources of uncertainty for the two fat mass estimation techniques (labeled-water and truncated cones). The current cones method can provide estimates of the proportion of adipose tissue in adult females and the proportion of fat in juvenile northern elephant seals (M. angustirostris) comparable to labeled-water methods, but it does not work for all cases or species. We reviewed the components and assumptions of the technique via measurements of seven early-molt and seven late-molt adult females. We show that seals are elliptical on land, rather than the assumed circular shape, and that skin may account for a high proportion of what is often defined as blubber. Also, blubber extends past the neck-to-pelvis region, and comparisons of new and old ultrasound instrumentation indicate previous measurements of sculp thickness may be biased low. Accounting for such differences, and incorporating new measurements of blubber density and the proportion of fat in blubber, we propose a modified cones method that can isolate blubber from non-blubber adipose tissue and separate fat into skin, blubber, and core compartments. Lastly, we found that adipose tissue and fat estimates using tritiated water may be biased high during the early molt. Both the tritiated water and modified cones methods had high, but reducible, uncertainty. The improved cones method for estimating body condition allows for more accurate quantification of the various tissue masses and may
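The geometric core of the cones method, summing truncated-cone segments along the body, here with elliptical rather than circular cross-sections as the abstract argues for, can be sketched as follows (the segment layout and function names are hypothetical):

```python
import math

def elliptical_frustum_volume(a1, b1, a2, b2, length):
    """Volume of a truncated cone whose end faces are ellipses with
    semi-axes (a1, b1) and (a2, b2); reduces to the circular frustum
    formula when a == b at both ends."""
    A1 = math.pi * a1 * b1
    A2 = math.pi * a2 * b2
    return length / 3.0 * (A1 + A2 + math.sqrt(A1 * A2))

def body_volume(semi_axes, lengths):
    """Sum frusta along the body: semi_axes is a list of (a, b) pairs at
    successive girth sites, lengths the distances between those sites."""
    return sum(elliptical_frustum_volume(*semi_axes[i], *semi_axes[i + 1], L)
               for i, L in enumerate(lengths))
```

Partitioning the result into skin, blubber, and core compartments then amounts to evaluating the same sum with inner semi-axes reduced by the measured skin and blubber thicknesses.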

  10. Life management of Zr 2.5% Nb pressure tube through estimation of fracture properties by cyclic ball indentation technique

    International Nuclear Information System (INIS)

    Chatterjee, S.; Madhusoodanan, K.; Rama Rao, A.

    2015-01-01

    In Pressurised Heavy Water Reactors (PHWRs), fuel bundles are located inside horizontal pressure tubes. Pressure tubes made of Zr-2.5 wt% Nb degrade under in-service environmental conditions. Measurement of the mechanical properties of degraded pressure tubes is important for assessing their fitness for further service in the reactor. The only way to accomplish this objective is to develop a system based on an in-situ measurement technique. Considering the importance of such measurement, an In-situ Property Measurement System (IProMS) based on the cyclic ball indentation technique has been designed and developed indigenously. The remotely operable system is capable of carrying out indentation trials on the inside surface of the pressure tube and of estimating important mechanical properties such as yield strength, ultimate tensile strength and hardness. Fracture toughness is known to be one of the important life-limiting parameters of the pressure tube. Hence, five spool pieces of Zr-2.5 wt% Nb pressure tube with different mechanical properties were used for the estimation of fracture toughness by the ball indentation method. Curved Compact Tension (CCT) specimens were also prepared from the five spool pieces for measurement of fracture toughness in conventional tests. The conventional fracture toughness values were used as reference data. A methodology has been developed to estimate the fracture properties of Zr-2.5 wt% Nb pressure tube material from analysis of the ball indentation test data. This paper highlights the comparison between tensile properties measured from conventional tests and IProMS trials, and relates the fracture toughness parameters measured from conventional tests to IProMS-estimated fracture properties such as the Indentation Energy to Fracture. (author)

  11. On advanced estimation techniques for exoplanet detection and characterization using ground-based coronagraphs

    Science.gov (United States)

    Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry

    2012-07-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets with masses less than 13 times that of Jupiter have been imaged to date. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO, have predicted contrast performance roughly a thousand times short of what would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential for improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge, with an evaluation of performance using Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on understanding and modeling realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging held in Squaw Valley, California, in March 2012.
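The ROC comparison proposed above reduces, in its simplest form, to sweeping a detection threshold over algorithm scores. A minimal sketch (the score and label handling, including the treatment of tied scores, is simplified):

```python
import numpy as np

def roc_points(scores, labels):
    """TPR/FPR pairs as the detection threshold sweeps from the highest
    score downward (ties are broken arbitrarily in this sketch)."""
    order = np.argsort(np.asarray(scores))[::-1]
    lab = np.asarray(labels)[order]
    tpr = np.cumsum(lab) / lab.sum()
    fpr = np.cumsum(1 - lab) / (1 - lab).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    # trapezoidal-rule area under the ROC curve
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```

LROC additionally scores whether the detection was correctly localized, but the threshold sweep and area summary work the same way.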

  12. A Comprehensive Review on Water Quality Parameters Estimation Using Remote Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Mohammad Haji Gholizadeh

    2016-08-01

    Full Text Available Remotely sensed data can reinforce the abilities of water resources researchers and decision makers to monitor waterbodies more effectively. Remote sensing techniques have been widely used to measure the qualitative parameters of waterbodies (i.e., suspended sediments, colored dissolved organic matter (CDOM), chlorophyll-a, and pollutants). A large number of different sensors on board various satellites and other platforms, such as airplanes, are currently used to measure the amount of radiation at different wavelengths reflected from the water's surface. In this review paper, various properties (spectral, spatial, temporal, etc.) of the more commonly employed spaceborne and airborne sensors are tabulated to serve as a sensor selection guide. Furthermore, this paper investigates the commonly used approaches and sensors employed in evaluating and quantifying eleven water quality parameters: chlorophyll-a (chl-a), colored dissolved organic matter (CDOM), Secchi disk depth (SDD), turbidity, total suspended sediments (TSS), water temperature (WT), total phosphorus (TP), sea surface salinity (SSS), dissolved oxygen (DO), biochemical oxygen demand (BOD) and chemical oxygen demand (COD).

  13. Estimation of fracture aperture using simulation technique; Simulation wo mochiita fracture kaiko haba no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, T [Geological Survey of Japan, Tsukuba (Japan); Abe, M [Tohoku University, Sendai (Japan). Faculty of Engineering

    1996-10-01

    Characteristics of amplitude variation around fractures were investigated by simulation for cases in which the fracture aperture was varied. Four models were used. Model-1 was a fracture model having a horizontal fracture at Z=0. In model-2, the fracture was replaced by a group of small fractures. Model-3 had a borehole diameter extended at Z=0 in the shape of a wedge. Model-4 had a low-velocity layer at Z=0. The maximum amplitudes were compared across depths and models. For model-1, the amplitude became larger at the depth of the fracture and smaller above the fracture. For model-2, when the cross width D increased to 4 cm, the amplitude approached that of model-1. For model-3, with the extended borehole diameter, when the extension of the borehole diameter ranged between 1 cm and 2 cm, hardly any change in amplitude was observed above and below the fracture. However, when the extension of the borehole diameter was 4 cm, the amplitude became smaller above the extended part of the borehole. 3 refs., 4 figs., 1 tab.

  14. Soil Erosion Estimation Using Remote Sensing Techniques in Wadi Yalamlam Basin, Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Jarbou A. Bahrawi

    2016-01-01

    Full Text Available Soil erosion is one of the major environmental problems, in terms of soil degradation, in Saudi Arabia. Soil erosion leads to significant on- and off-site impacts, such as a decrease in the productive capacity of the land and sedimentation. The quantity of soil erosion depends mainly on vegetation cover, topography, soil type, and climate. This research quantifies soil erosion under different levels of data availability in Wadi Yalamlam. Remote Sensing (RS) and Geographic Information Systems (GIS) techniques have been implemented for the assessment of the data, applying the Revised Universal Soil Loss Equation (RUSLE) for the calculation of erosion risk. Thirty-four soil samples were randomly selected for the calculation of the erodibility (K) factor, with K-factor values derived from soil property surfaces obtained by interpolating the soil sampling points. The soil erosion risk map was reclassified into five erosion risk classes, and 19.3% of Wadi Yalamlam (37,740 ha) is under very severe risk. GIS and RS proved to be powerful instruments for mapping soil erosion risk, providing sufficient tools for the analytical part of this research. The mapping results confirmed the role of RUSLE as a decision support tool.
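The RUSLE computation itself is a cell-by-cell product of the factor rasters, followed here by the kind of risk reclassification the abstract describes. A minimal sketch; the class bounds used for the reclassification are illustrative assumptions, not the study's values:

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: A = R * K * LS * C * P, average annual soil loss (t/ha/yr);
    operates elementwise on raster arrays of factor values."""
    return R * K * LS * C * P

def classify_risk(A, bounds=(5.0, 15.0, 30.0, 50.0)):
    """Reclassify soil loss into five classes (1 = slight .. 5 = very
    severe); the class bounds in t/ha/yr are illustrative assumptions."""
    return np.digitize(A, bounds) + 1
```

In a GIS workflow each factor (R from rainfall records, K from the interpolated soil surfaces, LS from the DEM, C and P from land cover) arrives as a co-registered raster, so the whole model is a single vectorized expression.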

  15. A helium-3 proportional counter technique for estimating fast and intermediate neutrons

    International Nuclear Information System (INIS)

    Kosako, Toshiso; Nakazawa, Masaharu; Sekiguchi, Akira; Wakabayashi, Hiroaki.

    1976-11-01

    A 3He proportional counter was employed to determine fast and intermediate neutron spectra over a wide energy region. The mixed-gas (3He, Kr) counter response and the spectrum unfolding code were prepared and applied to several neutron fields. The counter response calculation was performed using a Monte Carlo code, with particular attention to the particle-range calculation in the mixed gas. An experiment was carried out using a Van de Graaff accelerator to check the response function. The spectrum unfolding code was prepared to automatically evaluate the effect of the higher-energy spectrum on the pulse height distribution in the lower-energy region. The neutron spectra of various neutron fields were measured and compared with calculations such as discrete ordinates Sn calculations. It became clear that the technique developed here can be applied in practice over the neutron energy range from about 150 keV to 5 MeV. (auth.)

  16. Improved seismic risk estimation for Bucharest, based on multiple hazard scenarios, analytical methods and new techniques

    Science.gov (United States)

    Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen

    2014-05-01

    Bucharest, the capital of Romania (with 1,678,000 inhabitants in 2011), is one of the big cities in Europe most exposed to seismic damage. The major earthquakes affecting the city have their origin in the Vrancea region. The Vrancea intermediate-depth source generates, statistically, 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensity IX (MSK) for the 1940 event. During the 1977 earthquake, 1,420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable both due to construction (material, age) and soil conditions (high amplification generated within the weakly consolidated Quaternary deposits, whose thickness varies between 250 and 500 m across the city). Of 2,563 old buildings evaluated by experts, 373 are likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings in 2011 was 113,900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest in terms of building, population or economic damage probability. Unfortunately, most of them were based on incomplete data sets, whether regarding the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database of buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, the real-time estimation system developed there had not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest; first, we analyze the previous seismic risk studies from a SWOT perspective. This reveals that most of the studies don't use

  17. A voxel-based technique to estimate the volume of trees from terrestrial laser scanner data

    Science.gov (United States)

    Bienert, A.; Hess, C.; Maas, H.-G.; von Oheimb, G.

    2014-06-01

    The precise determination of the volume of standing trees is very important for ecological and economic considerations in forestry. If terrestrial laser scanner data are available, a simple approach to volume determination is to allocate points into a voxel structure and subsequently count the filled voxels. Generally, this method will overestimate the volume. The paper presents an improved algorithm to estimate the wood volume of trees using a voxel-based method which corrects for the overestimation. After voxel space transformation, each voxel which contains points is reduced to the volume of its surrounding bounding box. In a next step, occluded (inner stem) voxels are identified by a neighbourhood analysis sweeping in the X and Y directions from each filled voxel. Finally, the wood volume of the tree is composed of the sum of the bounding box volumes of the outer voxels and the volume of all occluded inner voxels. Scan data sets from several young Norway maple trees (Acer platanoides) were used to analyse the algorithm. For this purpose, the scanned trees, as well as the point clouds representing them, were separated into different components (stem, branches) to allow a meaningful comparison. Two reference measurements were performed for validation: a direct wood volume measurement obtained by placing the tree components into a water tank, and a frustum calculation of small trunk segments obtained by measuring the radii along the trunk. Overall, the results show slightly underestimated volumes (-0.3% for a sample of 13 trees) with an RMSE of 11.6% for the individual tree volume calculated with the new approach.
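The core of the algorithm, bounding-box-reduced filled voxels plus fully counted occluded interior voxels, can be sketched as follows. The "flanked on both sides along X and Y" test used for occlusion here is a simplified reading of the paper's neighbourhood sweep, and the default voxel size is an assumption:

```python
import numpy as np

def voxel_wood_volume(points, voxel=0.01):
    """Voxel-based volume: each filled voxel contributes the volume of
    the axis-aligned bounding box of its points; empty voxels enclosed
    by filled voxels in both the X and Y directions are treated as
    occluded interior voxels and contribute their full voxel volume."""
    pts = np.asarray(points, float)
    idx = np.floor(pts / voxel).astype(int)
    idx -= idx.min(axis=0)
    shape = tuple(idx.max(axis=0) + 1)
    filled = np.zeros(shape, dtype=bool)
    filled[tuple(idx.T)] = True

    # bounding-box volume of the points inside each filled voxel
    cells = {}
    for p, i in zip(pts, map(tuple, idx)):
        cells.setdefault(i, []).append(p)
    volume = 0.0
    for cell_pts in cells.values():
        cell_pts = np.asarray(cell_pts)
        extent = cell_pts.max(axis=0) - cell_pts.min(axis=0)
        volume += float(np.prod(extent))

    # occluded interior voxels: empty, but with filled voxels on both
    # sides along X and along Y
    def enclosed(f, axis):
        before = np.maximum.accumulate(f, axis=axis)
        after = np.flip(np.maximum.accumulate(np.flip(f, axis=axis), axis=axis), axis=axis)
        return before & after

    interior = enclosed(filled, 0) & enclosed(filled, 1) & ~filled
    return volume + interior.sum() * voxel ** 3
```

Plain voxel counting would charge every filled voxel its full cube; the bounding-box reduction is what removes the systematic overestimation the abstract describes.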

  18. A preliminary study on sedimentation rate in Tasek Bera Lake estimated using Pb-210 dating technique

    International Nuclear Information System (INIS)

    Wan Zakaria Wan Muhamad Tahir; Johari Abdul Latif; Juhari Mohd Yusof; Kamaruzaman Mamat; Gharibreza, M.R.

    2010-01-01

    Tasek Bera is the largest natural lake system (60 ha) in Malaysia, located in southwest Pahang. The lake is a complex dendritic system consisting of extensive peat-swamp forests. The catchment was originally lowland dipterocarp forest, but over the past four decades this has largely been replaced with oil palm and rubber plantations developed by the Federal Land Development Authority (FELDA). Despite the environmental importance of Tasek Bera, it is seriously affected by erosion, sedimentation and morphological change. Knowledge of accurate sedimentation rates and their causes is of utmost importance for appropriate management of lakes and future planning. In the present study, the environmental 210Pb (natural) dating technique was applied to determine the sedimentation rate and pattern, as well as the chronology of sediment deposits, in Tasek Bera Lake. Three undisturbed core samples from different locations, at the main entry and exit points of the river mouth and in open water within the lake, were collected during a field sampling campaign in October 2009 and analyzed for 210Pb using the gamma spectrometry method. Undisturbed sediments are classified as organic soils to peat with a clayey texture, composed of 93% clay, 5% silt, and 2% very fine sand. Comparatively higher sedimentation rates were observed at the entry (0.06-1.58 cm/yr) and exit (0.05-1.55 cm/yr) points of the main river mouth compared to the lake's open water (0.02-0.74 cm/yr). Reasons for the different patterns of sedimentation rates in this lake are discussed and conclusions drawn in this paper. (author)

  19. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series.

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal parameter outcomes that exhibit self-similarity over long time scales, characterized by a power-law frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
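A common baseline among the compared techniques is to estimate β from the slope of the log-log periodogram; note the study itself found the averaged wavelet coefficient method more accurate, so the sketch below is illustrative rather than the recommended estimator:

```python
import numpy as np

def spectral_exponent(x):
    """Estimate beta in S(f) ~ 1/f**beta from the slope of a least-
    squares line fitted to the log-log periodogram (DC bin excluded)."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x))[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope
```

Wavelet- and fluctuation-based estimators trade this single global fit for scale-by-scale statistics, which is where their differing variance and bias behaviour, the subject of the paper, comes from.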

  20. Estimating rumen microbial protein supply for indigenous ruminants using nuclear and purine excretion techniques in Indonesia

    International Nuclear Information System (INIS)

    Soejono, M.; Yusiati, L.M.; Budhi, S.P.S.; Widyobroto, B.P.; Bachrudin, Z.

    1999-01-01

    The microbial protein supply to ruminants can be estimated from the amount of purine derivatives (PD) excreted in the urine. Four experiments were conducted to evaluate the PD excretion method for Bali and Ongole cattle. In the first experiment, six male, two-year-old Bali cattle (Bos sondaicus) and six Ongole cattle (Bos indicus) of similar sex and age were used to quantify the endogenous contribution to total PD excretion in the urine. In the second experiment, four cattle from each breed were used to examine the response of PD excretion to feed intake. 14C-uric acid was injected in a single dose to define the partitioning ratio of renal:non-renal losses of plasma PD. The third experiment examined the ratio of purine N:total N in a mixed rumen microbial population. The fourth experiment measured the activities of enzymes in blood, liver and intestinal tissue concerned with PD metabolism. The first experiment showed that endogenous PD excretion was 145 ± 42.0 and 132 ± 20.0 μmol/kg W^0.75/d for Bali and Ongole cattle, respectively. The second experiment indicated that the proportion of plasma PD excreted in the urine of Bali and Ongole cattle was 0.78 and 0.77, respectively. Hence, the relationship between purines absorbed (X) and PD excretion (Y) can be stated as Y = 0.78X + 0.145W^0.75 for Bali cattle and Y = 0.77X + 0.132W^0.75 for Ongole cattle. The third experiment showed that there were no differences in the ratio of purine N:total N in mixed rumen microbes of Bali and Ongole cattle (17% vs 18%). The last experiment showed that the intestinal xanthine oxidase activity of Bali cattle was lower than that of Ongole cattle (0.001 vs 0.015 μmol uric acid produced/min/g tissue), but xanthine oxidase activity in the blood and liver of Bali cattle was higher than that of Ongole cattle (3.48 vs 1.34 μmol/min/L plasma and 0.191 vs 0.131 μmol/min/g liver tissue). Thus, there was no difference in PD excretion between these two breeds.
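The breed-specific prediction equations above can be wrapped directly. A minimal sketch, with assumed units: PD excretion and purines absorbed in mmol/d, live weight W in kg, so the endogenous coefficients become 0.145 and 0.132 mmol/kg W^0.75/d:

```python
def pd_excretion(purines_absorbed_mmol, W, slope, endogenous):
    """Y = slope * X + endogenous * W**0.75: urinary purine-derivative
    excretion predicted from purines absorbed and metabolic weight."""
    return slope * purines_absorbed_mmol + endogenous * W ** 0.75

def purines_absorbed(pd_excreted_mmol, W, slope, endogenous):
    # invert the prediction equation to recover absorbed purines
    return (pd_excreted_mmol - endogenous * W ** 0.75) / slope

# breed coefficients as reported in the abstract
BALI = {"slope": 0.78, "endogenous": 0.145}
ONGOLE = {"slope": 0.77, "endogenous": 0.132}
```

In practice the inverse form is the useful one: measured urinary PD is converted to purines absorbed, from which microbial protein supply is then derived.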

  1. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  2. Exploration of deep S-wave velocity structure using microtremor array technique to estimate long-period ground motion

    International Nuclear Information System (INIS)

    Sato, Hiroaki; Higashi, Sadanori; Sato, Kiyotaka

    2007-01-01

    In this study, microtremor array measurements were conducted at 9 sites in the Niigata plain to explore deep S-wave velocity structures for the estimation of long-period earthquake ground motion. The 1D S-wave velocity profiles in the Niigata plain are characterized by 5 layers with S-wave velocities of 0.4, 0.8, 1.5, 2.1 and 3.0 km/s, respectively. The depth to the basement layer is greater in the Niigata port area, located on the Japan Sea side of the Niigata plain. In this area, the basement depth is about 4.8 km around Seirou town and about 4.1 km around Niigata city. These features of the basement depth in the Niigata plain are consistent with previous surveys. In order to verify the profiles derived from the microtremor array exploration, we estimated the group velocities of Love waves for four propagation paths of long-period earthquake ground motion during the Niigata-ken Tyuetsu earthquake by the multiple filter technique, and compared them with theoretical values calculated from the derived profiles. As a result, it was confirmed that the group velocities from the derived profiles were in good agreement with those from long-period earthquake ground motion records during the Niigata-ken Tyuetsu earthquake. Furthermore, we applied the estimation method of design basis earthquake input for seismically isolated nuclear power facilities, using the normal mode solution, to estimate long-period earthquake ground motion during the Niigata-ken Tyuetsu earthquake. As a result, it was demonstrated that the applicability of the above method to the estimation of long-period earthquake ground motion was improved by using the derived 1D S-wave velocity profile. (author)

  3. The suitability of EIT to estimate EELV in a clinical trial compared to oxygen wash-in/wash-out technique.

    Science.gov (United States)

    Karsten, Jan; Meier, Torsten; Iblher, Peter; Schindler, Angela; Paarmann, Hauke; Heinze, Hermann

    2014-02-01

    Open endotracheal suctioning procedure (OSP) and recruitment manoeuvres (RM) are known to induce severe alterations of end-expiratory lung volume (EELV). We hypothesised that EIT-derived lung volumes lack clinical validity, and studied the suitability of EIT to estimate EELV compared with the oxygen wash-in/wash-out technique. Fifty-four postoperative cardiac surgery patients were enrolled and received standardized ventilation and OSP. Patients were randomized into two groups receiving either RM after suctioning (group RM) or no RM (group NRM). Measurements were conducted at the following time points: baseline (T1), after suctioning (T2), after RM or NRM (T3), and 15 and 30 min after T3 (T4 and T5). We measured EELV using the oxygen wash-in/wash-out technique (EELV_O2) and computed EELV from EIT (EELV_EIT) by the following formula: EELV_EIT = EELV_O2 + ΔEELI × VT/ΔZ. EELV_EIT values were compared with EELV_O2 using Bland-Altman analysis and Pearson correlation. Limits of agreement ranged from -0.83 to 1.31 l. Pearson correlation revealed significant results. There was no significant impact of RM or NRM on the EELV_O2-EELV_EIT relationship (p=0.21; p=0.23). During typical routine respiratory manoeuvres like endotracheal suctioning or alveolar recruitment, EELV cannot be estimated by EIT with reasonable accuracy.
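The conversion formula used in the study scales the end-expiratory lung impedance change by the tidal volume-to-impedance ratio. Sketched directly, with assumed units of litres for the volumes and arbitrary impedance units for ΔEELI and ΔZ:

```python
def eelv_from_eit(eelv_o2, delta_eeli, tidal_volume, delta_z):
    """EELV_EIT = EELV_O2 + dEELI * VT / dZ: the end-expiratory lung
    impedance change dEELI is converted to volume using the tidal
    volume-to-impedance ratio VT/dZ of the same recording."""
    return eelv_o2 + delta_eeli * tidal_volume / delta_z
```

The study's point is that this linear scaling inherits any drift or regional mismatch in the impedance signal, which is why the resulting limits of agreement were too wide for clinical use.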

  4. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study was performed of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects. The geometry used for the study was a cylinder with dimensions similar to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated from the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  5. Application of PSO (particle swarm optimization) and GA (genetic algorithm) techniques on demand estimation of oil in Iran

    International Nuclear Information System (INIS)

    Assareh, E.; Behrang, M.A.; Assari, M.R.; Ghanbarzadeh, A.

    2010-01-01

    This paper presents the application of PSO (Particle Swarm Optimization) and GA (Genetic Algorithm) techniques to estimate oil demand in Iran based on socio-economic indicators. The models are developed in two forms (exponential and linear) and applied to forecast oil demand in Iran. PSO-DEM and GA-DEM (PSO and GA demand estimation models) are developed to estimate future oil demand values based on population, GDP (gross domestic product), import and export data. Oil consumption in Iran from 1981 to 2005 is considered as the case study. The available data were used partly for finding the optimal, or near-optimal, values of the weighting parameters (1981-1999) and partly for testing the models (2000-2005). For the best GA results, the average relative errors on the testing data were 2.83% and 1.72% for the exponential and linear GA-DEM models, respectively. The corresponding values for PSO were 1.40% and 1.36% for the exponential and linear PSO-DEM models, respectively. Oil demand in Iran is forecasted up to the year 2030. (author)
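The linear form of such a demand model is a weighted sum of the socio-economic indicators. As a stand-in for the paper's PSO/GA weight search, the sketch below fits the weights by least squares and reports the mean relative error used in the paper's comparison; the variable names and train/test split are assumptions:

```python
import numpy as np

def fit_linear_demand(indicators, demand):
    """Fit demand = w0 + w . indicators by least squares (the paper
    instead searches the weights with PSO or GA)."""
    A = np.column_stack([np.ones(len(indicators)), indicators])
    w, *_ = np.linalg.lstsq(A, demand, rcond=None)
    return w

def mean_relative_error(w, indicators, demand):
    # average |prediction - actual| / actual over the evaluation years
    pred = np.column_stack([np.ones(len(indicators)), indicators]) @ w
    return float(np.mean(np.abs(pred - demand) / demand))
```

The exponential form replaces the weighted sum with a product of powered indicators; taking logarithms reduces it to the same linear fitting problem.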

  6. SU-C-207A-05: Feature Based Water Equivalent Path Length (WEPL) Determination for Proton Radiography by the Technique of Time Resolved Dose Measurement

    International Nuclear Information System (INIS)

    Zhang, R; Jee, K; Sharp, G; Flanz, J; Lu, H

    2016-01-01

    Purpose: Studies show that WEPL can be determined from modulated dose rate functions (DRFs). However, the previous calibration method, based on statistics of the DRF, is sensitive to energy mixing of protons scattered through different materials (termed range mixing here), causing inaccuracies in the determination of WEPL. This study explores time-domain features of the DRF to reduce the effect of range mixing in proton radiography (pRG) by this technique. Methods: An amorphous silicon flat panel (PaxScan™ 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed behind phantoms to measure DRFs from a proton beam modulated by a specially designed modulator wheel. The performance of two methods, the previously used method based on the root mean square (RMS) and the new approach based on time-domain features of the DRF, is compared for retrieving WEPL and RSP from pRG of a Gammex phantom. Results: Calibration by T80 (the time point at 80% of the major peak) was more robust to range mixing and produced WEPL with improved accuracy. The error in RSP was reduced from 8.2% to 1.7% for lung-equivalent material, with the mean error for all other materials reduced from 1.2% to 0.7%. The mean error of the full width at half maximum (FWHM) of retrieved inserts decreased from 25.85% to 5.89% for the RMS and T80 methods, respectively. Monte Carlo simulations of simplified cases also demonstrated that the T80 method is less sensitive to range mixing than the RMS method. Conclusion: WEPL images have been retrieved from single-flat-panel DRF measurements, with inaccuracies reduced by exploiting time-domain features as the calibration parameter. The T80 method is validated to be less sensitive to range mixing and can thus retrieve WEPL values in the proximity of interfaces with improved numerical and spatial accuracy for proton radiography.
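The time-domain feature itself, the time at which the dose-rate function first reaches 80% of its main-peak value, can be extracted as follows. This is a sketch that assumes a monotone rising edge up to the peak, not the authors' calibration pipeline:

```python
import numpy as np

def t80(time, dose_rate):
    """First time the dose-rate function reaches 80% of its peak value,
    located by linear interpolation on the rising edge (assumed to be
    monotone up to the peak)."""
    time = np.asarray(time, float)
    dose_rate = np.asarray(dose_rate, float)
    peak = int(np.argmax(dose_rate))
    level = 0.8 * dose_rate[peak]
    rising = dose_rate[:peak + 1]
    i = int(np.searchsorted(rising, level))  # valid because the edge is monotone
    if i == 0:
        return float(time[0])
    frac = (level - rising[i - 1]) / (rising[i] - rising[i - 1])
    return float(time[i - 1] + frac * (time[i] - time[i - 1]))
```

A per-pixel map of such crossing times, run through a T80-to-WEPL calibration curve, is the kind of quantity the abstract's comparison is about; a statistic like the RMS of the whole DRF mixes in the tail that range mixing distorts, which is why the edge feature is more robust.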

  7. Comparison of techniques for estimating PAH bioavailability: Uptake in Eisenia fetida, passive samplers and leaching using various solvents and additives

    Energy Technology Data Exchange (ETDEWEB)

    Bergknut, Magnus [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)]. E-mail: magnus.bergknut@chem.umu.se; Sehlin, Emma [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Lundstedt, Staffan [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Andersson, Patrik L. [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Haglund, Peter [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden); Tysklind, Mats [Department of Chemistry, Environmental Chemistry, Umeaa University, SE-90187 Umeaa (Sweden)

    2007-01-15

    The aim of this study was to evaluate different techniques for assessing the availability of polycyclic aromatic hydrocarbons (PAHs) in soil. This was done by comparing the amounts (total and relative) taken up by the earthworm Eisenia fetida with the amounts extracted by solid-phase microextraction (SPME), semi-permeable membrane devices (SPMDs), leaching with various solvent mixtures, leaching using additives, and sequential leaching. Bioconcentration factors of PAHs in the earthworms based on equilibrium partitioning theory correlated poorly with observed values. This was most notable for PAHs with high concentrations in the studied soil. Evaluation by principal component analysis (PCA) showed distinct differences between the evaluated techniques and, generally, larger proportions of carcinogenic PAHs (4-6 fused rings) in the earthworms. These results suggest that it may be difficult to develop a chemical method capable of mimicking biological uptake, and thus of estimating the bioavailability of PAHs. - The total and relative amounts of PAHs extracted by abiotic techniques for assessing the bioavailability of PAHs were found to differ from the amounts taken up by Eisenia fetida.
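The PCA step that separated the techniques can be sketched in a few lines: rows are extraction techniques, columns are relative PAH amounts, and the technique "score plot" falls out of a singular value decomposition of the column-centred matrix. The numbers below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical rows = techniques, columns = relative PAH amounts.
X = np.array([
    [0.9, 0.7, 0.2, 0.1],   # earthworm uptake
    [0.8, 0.6, 0.3, 0.2],   # SPME
    [0.5, 0.5, 0.5, 0.5],   # solvent leaching
    [0.4, 0.6, 0.6, 0.4],   # sequential leaching
])

# centre each column, then PCA via SVD
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                    # technique coordinates (score plot)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
```

Techniques that plot far apart in the first two score columns extract systematically different PAH profiles, which is how the abstract's "distinct differences" would appear.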


  9. Accuracy and feasibility of estimated tumour volumetry in primary gastric gastrointestinal stromal tumours: validation using semiautomated technique in 127 patients.

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B; O'Neill, Ailbhe C; Nishino, Mizuki; Rosenthal, Michael H; Ramaiya, Nikhil H

    2016-01-01

    To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) against semiautomated volumetry. In this IRB-approved retrospective study, we measured the three longest diameters in the x, y and z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimated volumes (V1-V6) were obtained using formulae for spheres and ellipsoids. Intra- and interobserver agreement of Vsegmented, and agreement of V1-6 with Vsegmented, were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Median Vsegmented and V1-V6 were 75.9, 124.9, 111.6, 94.0, 94.4, 61.7 and 80.3 cm³, respectively. There was strong intra- and interobserver agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x ≠ y ≠ z), with a CCC of 0.96 (95% CI 0.95-0.97). The mean relative difference was smallest for V6 (0.6%), while it was -19.1% for V5, +14.5% for V4, +17.9% for V3, +32.6% for V2 and +47% for V1. Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semiautomated techniques are unavailable. Estimation of tumour volume in primary GIST using mathematical formulae is feasible. Gastric GISTs are rarely spherical. Segmented volumes are highly concordant with three-axis-based scalene ellipsoid volumes. Ellipsoid volume can be used as an alternative to automated tumour volumetry.
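The scalene-ellipsoid estimate that agreed best with segmentation (V6) is the standard formula V = (π/6)·x·y·z applied to the three measured diameters. A small sketch contrasting it with the single-diameter sphere estimate (the tumour dimensions are hypothetical):

```python
import math

def ellipsoid_volume(x, y, z):
    """Scalene-ellipsoid estimate (the V6-style formula):
    V = (pi/6) * x * y * z, with x, y, z the three measured diameters."""
    return math.pi / 6.0 * x * y * z

def sphere_volume(d):
    """Single-diameter sphere estimate: V = (pi/6) * d**3."""
    return math.pi / 6.0 * d ** 3

# hypothetical tumour measuring 6 x 5 x 4 cm
v_ellipsoid = ellipsoid_volume(6.0, 5.0, 4.0)   # ~62.8 cm^3
v_sphere = sphere_volume(6.0)                   # ~113.1 cm^3
```

Using only the longest diameter as a sphere overestimates badly for non-spherical lesions, which mirrors the +47% bias the abstract reports for V1.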

  10. Risk estimation in association with diagnostic techniques in the nuclear medicine service of the Camaguey Ciego de Avila Territory

    International Nuclear Information System (INIS)

    Barrerras, C.A.; Brigido, F.O.; Naranjo, L.A.; Lasserra, S.O.; Hernandez Garcia, J.

    1999-01-01

    The nuclear medicine service at the Maria Curie Oncological Hospital, Camaguey, has over three decades of experience in using radiopharmaceutical imaging agents for diagnosis. Although the clinical risk associated with these techniques is negligible, it is necessary to evaluate the effective dose delivered to the patient due to the introduction of radioactive substances into the body. The study of the dose delivered to the patient provides useful data for evaluating the detriment associated with this medical practice, for its subsequent optimization and, consequently, for minimizing the stochastic effects on the patient. The aim of our paper is to study the collective effective dose delivered by the nuclear medicine service to the Camaguey and Ciego de Avila population from 1995 to 1998, and the relative contribution of the different diagnostic examinations to the total annual collective effective dose. The studies were conducted on the basis of statistics from nuclear medicine examinations given to a population of 1102353 inhabitants since 1995. The results show that neck examinations with 1168.8 man Sv (1.11 Sv/expl), thyroid explorations with 119.6 man Sv (55.5 mSv/expl) and iodide uptake with 113.7 man Sv (14.0 mSv/expl) are the techniques contributing most to the total annual collective effective dose of 1419.5 man Sv. The risk associated with the diagnostic techniques of the nuclear medicine service studied is globally low (total detriment: 103.6 as a result of 16232 explorations), similar to other published data

  11. Techniques for Estimating Emissions Factors from Forest Burning: ARCTAS and SEAC4RS Airborne Measurements Indicate which Fires Produce Ozone

    Science.gov (United States)

    Chatfield, Robert B.; Andreae, Meinrat O.

    2016-01-01

    Previous studies of emission factors from biomass burning are prone to large errors because they ignore the interplay of mixing and varying pre-fire background CO2 levels. Such complications severely affected our studies of 446 forest-fire plume samples measured in the Western US by the science teams of NASA's SEAC4RS and ARCTAS airborne missions. Consequently we propose a Mixed Effects Regression Emission Technique (MERET) to check techniques like the Normalized Emission Ratio Method (NERM), whose use of sequential observations cannot disentangle emissions and mixing. We also evaluate a simpler "consensus" technique. All techniques relate emissions to fuel burned using C_burn = ΔC_tot added to the fire plume, where C_tot ≈ CO2 + CO. Mixed-effects regression can estimate pre-fire background values of C_tot (indexed by observation j) simultaneously with emission factors indexed by individual species i, (Δx_i / C_burn)_{i,j}. MERET and "consensus" require more than emissions indicators. Our studies excluded samples where exogenous CO or CH4 might have been fed into a fire plume, mimicking emission. We sought to let the data on 13 gases and particulate properties suggest clusters of variables and plume types, using non-negative matrix factorization (NMF). While samples were mixtures, the NMF unmixing suggested purer burn types. Particulate properties (b_scat, b_abs, SSA, AAE) and gas-phase emissions were interrelated. Finally, we sought a simple categorization useful for modeling ozone production in plumes. Two kinds of fires produced high ozone: those with large fuel nitrogen, as evidenced by remnant CH3CN in the plumes, and those from very intense large burns. Fire types with optimal ratios of ΔNOy/ΔHCHO were associated with the highest additional ozone per unit C_burn; perhaps these plumes exhibit limited NOx binding to reactive organics.
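MERET itself is a mixed-effects model, but the core idea of recovering an emission ratio as a regression slope while a background term absorbs the pre-fire level can be sketched with ordinary least squares on synthetic data. This is a deliberately simplified stand-in (one constant background rather than per-observation backgrounds; all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plume samples: total carbon C_tot = background + C_burn,
# and a trace species x emitted in proportion to carbon burned.
background = 400.0            # hypothetical pre-fire C_tot, ppm
true_er = 0.08                # emission ratio to be recovered
c_burn = rng.uniform(5.0, 50.0, size=200)
c_tot = background + c_burn
x = true_er * c_burn + rng.normal(0.0, 0.05, size=200)

# OLS of x on C_tot: the slope is the emission ratio; the intercept
# absorbs the (here constant) background contribution.
slope, intercept = np.polyfit(c_tot, x, 1)
```

The mixed-effects refinement lets the background vary by observation group instead of being a single intercept, which is what protects the estimate when pre-fire CO2 differs between plume transects.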

  12. Menstrual cycle length: a surrogate measure of reproductive health capable of improving the accuracy of biochemical/sonographical ovarian reserve test in estimating the reproductive chances of women referred to ART.

    Science.gov (United States)

    Gizzo, Salvatore; Andrisani, Alessandra; Noventa, Marco; Quaranta, Michela; Esposito, Federica; Armanini, Decio; Gangemi, Michele; Nardelli, Giovanni B; Litta, Pietro; D'Antona, Donato; Ambrosini, Guido

    2015-04-10

    The aim of the study was to investigate whether menstrual cycle length may be considered a surrogate measure of reproductive health, improving the accuracy of biochemical/sonographical ovarian reserve tests in estimating the reproductive chances of women referred to ART. A retrospective observational study was conducted at Padua's public tertiary-level centre. A total of 455 normo-ovulatory infertile women scheduled for their first fresh non-donor IVF/ICSI treatment were included. The mean menstrual cycle length (MCL) during the preceding 6 months was calculated by physicians from information in our electronic database (the first day of each menstrual cycle, reported monthly by each patient by telephone). We evaluated the relations between MCL, ovarian response to the stimulation protocol, oocyte fertilization ratio, ovarian sensitivity index (OSI) and pregnancy rate in different cohorts of patients according to age class and estimated ovarian reserve. In women younger than 35 years, MCL over 31 days may be associated with an increased risk of OHSS and with a good OSI. In women older than 35 years, and particularly than 40 years, MCL shortening may be considered a marker of ovarian aging and may be associated with poor ovarian response, low OSI and reduced fertilization rate. When the AMH serum value is lower than 1.1 ng/ml in patients older than 40 years, MCL may help clinicians discriminate real from expected poor responders. In the pool of normoresponders, MCL was not correlated with pregnancy rate, while a positive association was found with patients' age. The MCL diary is more predictive than chronological age in estimating ovarian biological age and response to COH, and it is more predictive than AMH in discriminating expected from real poor responders. In women older than 35 years MCL shortening may be considered a marker of ovarian aging, while chronological age remains the most accurate parameter for predicting pregnancy.

  13. CEBAF Upgrade Bunch Length Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Mahmoud [Old Dominion Univ., Norfolk, VA (United States)

    2016-05-01

    Many accelerators use short electron bunches, and measuring the bunch length is important for efficient operation. CEBAF needs a suitable bunch length because bunches that are too long will cause beam interruptions to the halls due to excessive energy spread and beam loss. In this work, bunch length was measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques were commissioned: a harmonic cavity showed good results compared with expectations from simulation, and a real-time interferometer was commissioned and given its first checkouts. Three other techniques were used for measurement and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally, while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.

  14. The use of thermovision technique to estimate the properties of highly filled polyolefins composites with calcium carbonate

    Energy Technology Data Exchange (ETDEWEB)

    Jakubowska, Paulina; Klozinski, Arkadiusz [Poznan University of Technology, Institute of Technology and Chemical Engineering, Polymer Division Pl. M. Sklodowskiej-Curie 2, 60-965 Poznan, Poland, Paulina.Jakubowska@put.poznan.pl (Poland)

    2015-05-22

    The aim of this work was to determine the possibility of using the thermovision technique to estimate the thermal properties of ternary highly filled composites (PE-MD/iPP/CaCO3) and polymer blends (PE-MD/iPP) during mechanical measurements. The ternary, polyolefin-based composites contained the following amounts of calcium carbonate: 48, 56 and 64 wt %. All materials were subjected to cyclic tensile loads (x1, x5, x10, x20, x50, x100, x500, x1000). Simultaneously, a fully radiometric recording was made using a TESTO infrared camera. After the fatigue process, all samples were subjected to a static tensile test and the maximum temperature at break was also recorded. The temperature values were analyzed as a function of cyclic loads and filler content. The changes in the Young’s modulus values were also investigated.

  15. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1·10⁻³ and 2p = 4.8·10⁻³, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index...

  16. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique, Including Investigation of Robust Analysis Techniques and Estimates of Weather-Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    The DeltaQ test is a method of estimating the air leakage from forced air duct systems. Developed primarily for residential and small commercial applications, it uses the changes in blower door test results due to forced air system operation. Previous studies established the principles behind DeltaQ testing, but raised issues of precision of the test, particularly for leaky homes on windy days. Details of the measurement technique are available in an ASTM Standard (ASTM E1554-2007). In order to ease adoption of the test method, this study answers questions regarding the uncertainty due to changing weather during the test (particularly changes in wind speed) and the applicability to low-leakage systems. The first question arises because the building envelope air flows and pressures used in the DeltaQ test are influenced by weather-induced pressures. Variability in wind-induced pressures rather than temperature-difference-induced pressures dominates this effect, because the wind pressures change rapidly over the time period of a test. The second question needs to be answered so that DeltaQ testing can be used in programs requiring or giving credit for tight ducts (e.g., California's Building Energy Code (CEC 2005)). DeltaQ modeling biases have previously been investigated in laboratory studies where there were no weather-induced changes in envelope flows and pressures. Laboratory work by Andrews (2002) and Walker et al. (2004) found biases of about 0.5% of forced air system blower flow and individual test uncertainty of about 2% of forced air system blower flow. The laboratory tests were repeated by Walker and Dickerhoff (2006 and 2008) using a new ramping technique that continuously varied envelope pressures and air flows rather than taking data at pre-selected pressure stations (as used in ASTM E1554-2003 and other previous studies). The biases and individual test uncertainties for ramping were found to be very close (less than 0.5% of air handler flow) to those

  17. Telomere Length and Mortality

    DEFF Research Database (Denmark)

    Kimura, Masayuki; Hjelmborg, Jacob V B; Gardner, Jeffrey P

    2008-01-01

    Leukocyte telomere length, representing the mean length of all telomeres in leukocytes, is ostensibly a bioindicator of human aging. The authors hypothesized that shorter telomeres might forecast imminent mortality in elderly people better than leukocyte telomere length. They performed mortality...

  18. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibrating amplitudes of the video became increasingly large.
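As a sketch of the Kalman-filtering stage, the following smooths a 1-D motion parameter (e.g. per-frame horizontal translation) with a constant-velocity model. The state model and noise settings are generic assumptions, not the paper's exact filter:

```python
import numpy as np

def kalman_smooth_1d(z, q=1e-3, r=1e-1):
    """Constant-velocity Kalman filter over a 1-D motion parameter.
    q and r are process and measurement noise variances (assumed)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: pos += vel
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])            # initial [position, velocity]
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(1)
true_motion = np.linspace(0.0, 10.0, 50)            # steady drift
measured = true_motion + rng.normal(0.0, 0.5, 50)   # jittery raw estimates
smoothed = kalman_smooth_1d(measured)
```

In a stabilizer, the difference between the raw and smoothed trajectories is the jitter that frame compensation removes, while the smoothed component preserves intentional camera motion.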

  19. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. The uncertainty of the heating value measured by the gas chromatography method on a mole basis was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce the measurement uncertainty further. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard litres per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. The uncertainty of the power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.
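Since electrical efficiency is a quotient of power output by fuel energy input, the component relative uncertainties combine in quadrature. A sketch using the abstract's figures, treating them all as comparable 95%-confidence relative uncertainties (a simplification of a full GUM-style budget):

```python
import math

# Relative uncertainties quoted in the abstract (all at 95% confidence)
u_heating = 0.0012   # heating value, +/-0.12%
u_flow = 0.010       # fuel flow rate, +/-1.0%
u_power = 0.0014     # electrical power, +/-0.14%

# Efficiency eta = P / (V_fuel * HV). For a product/quotient of
# independent quantities, relative uncertainties add in quadrature.
u_eta = math.sqrt(u_heating**2 + u_flow**2 + u_power**2)   # ~1.0%
```

The quadrature sum makes clear why the overall figure is about ±1.0%: the flow-rate term dominates, and the heating-value and power terms contribute almost nothing.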

  20. Reliable and Damage-Free Estimation of Resistivity of ZnO Thin Films for Photovoltaic Applications Using Photoluminescence Technique

    Directory of Open Access Journals (Sweden)

    N. Poornima

    2013-01-01

    This work projects photoluminescence (PL) as an alternative technique for estimating the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films deposited by chemical spray pyrolysis (CSP) under varying deposition parameters (solvent, spray rate, pH of precursor, and so forth) were used for this study. Variation in the deposition conditions has a tremendous impact on the luminescence properties as well as the resistivity. Two emissions could be recorded for all samples: the near-band-edge emission (NBE) at 380 nm and the deep-level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the DLE to NBE intensities (I_DLE/I_NBE) can be reduced by controlling oxygen incorporation in the sample. Electrical measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variations of I_DLE/I_NBE and resistivity for samples prepared under different deposition conditions are similar in nature, I_DLE/I_NBE always being an order of magnitude lower than the resistivity. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.

  1. Spatial and temporal single-cell volume estimation by a fluorescence imaging technique with application to astrocytes in primary culture

    Science.gov (United States)

    Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten

    1999-05-01

    Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.

  2. Validation of myocardial blood flow estimation with nitrogen-13 ammonia PET by the argon inert gas technique in humans

    International Nuclear Information System (INIS)

    Kotzerke, J.; Glatting, G.; Neumaier, B.; Reske, S.N.; Hoff, J. van den; Hoeher, M.; Woehrle, J.

    2001-01-01

    We simultaneously determined global myocardial blood flow (MBF) by the argon inert gas technique and by nitrogen-13 ammonia positron emission tomography (PET) to validate PET-derived MBF values in humans. A total of 19 patients were investigated at rest (n=19) and during adenosine-induced hyperaemia (n=16). Regional coronary artery stenoses were ruled out by angiography. The argon inert gas method uses the difference between arterial and coronary sinus argon concentrations during inhalation of a mixture of 75% argon and 25% oxygen to estimate global MBF. It can be considered as valid as the microspheres technique, which, however, cannot be applied in humans. Dynamic PET was performed after injection of 0.8±0.2 GBq 13N-ammonia, and MBF was calculated by applying a two-tissue-compartment model. MBF values derived from the argon method at rest and during the hyperaemic state were 1.03±0.24 ml min⁻¹ g⁻¹ and 2.64±1.02 ml min⁻¹ g⁻¹, respectively. MBF values derived from ammonia PET at rest and during hyperaemia were 0.95±0.23 ml min⁻¹ g⁻¹ and 2.44±0.81 ml min⁻¹ g⁻¹, respectively. The correlation between the two methods was close (y=0.92x+0.14, r=0.96), validating the estimation of MBF in humans by 13N-ammonia PET. (orig.)

  3. Estimating distribution and connectivity of recolonizing American marten in the northeastern United States using expert elicitation techniques

    Science.gov (United States)

    Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.

    2018-01-01

    The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and on how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and the lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts, including wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We then fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model-averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. The results demonstrate the effectiveness of expert‐opinion data at modeling occupancy for rare
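The model-averaging step rests on Akaike weights computed from AIC differences: each model's weight is proportional to exp(-ΔAIC/2). A minimal sketch (the three AIC values are hypothetical, standing in for the study's three top-supported models):

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j),
    where dAIC_i = AIC_i - min(AIC). Used to weight models when
    model-averaging covariate effects."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical AIC scores for three top occupancy models
weights = aic_weights([210.0, 211.2, 213.5])
```

Averaging each covariate's estimated effect across models with these weights yields the final composite model, rather than discarding the runner-up models outright.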

  4. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    Science.gov (United States)

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate the growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars, an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). Our aim was to develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided on the stages in consensus. When necessary, a third observer acted as referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine-learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest, and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine-learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer learning, as a type of deep-learning convolutional neural network approach, outperformed all other tested approaches. Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was
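The linearly weighted kappa used as a validation metric penalizes disagreements in proportion to how many stages apart the automated and reference ratings fall. A sketch implementation (pure Python/NumPy, not the MATLAB code used in the study; the stage vectors are invented):

```python
import numpy as np

def linear_weighted_kappa(a, b, n_stages):
    """Cohen's kappa with linear disagreement weights w_ij = |i - j|:
    kappa = 1 - sum(w * observed) / sum(w * expected)."""
    a = np.asarray(a)
    b = np.asarray(b)
    # observed joint distribution of (reference, automated) stages
    obs = np.zeros((n_stages, n_stages))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # expected joint distribution under independent marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_stages)
    w = np.abs(idx[:, None] - idx[None, :])   # linear weights
    return 1.0 - (w * obs).sum() / (w * exp).sum()

reference = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
automated = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]      # perfect agreement
kappa_perfect = linear_weighted_kappa(reference, automated, 10)

automated_off = [0, 2, 2, 3, 4, 5, 6, 7, 8, 9]  # one rating off by one
kappa_off = linear_weighted_kappa(reference, automated_off, 10)
```

Being off by one stage therefore costs far less than being off by five, which matches how a 0.6-stage mean absolute difference can coexist with only moderate raw accuracy.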

  5. Evaluation of the 137Cs technique for estimating wind erosion losses for some sandy Western Australian soils

    International Nuclear Information System (INIS)

    Harper, R.J.; Gilkes, R.J.

    1994-01-01

    The utility of the caesium-137 technique for estimating the effects of wind erosion was evaluated on the soils of a semi-arid agricultural area near Jerramungup, Western Australia. The past incidence of wind erosion was estimated from field observations of soil profile morphology and an existing remote sensing study. Erosion was limited to sandy surfaced soils (0-4% clay), with a highly significant difference in 137Cs values between eroded and non-eroded sandy soils, with mean values of 243±17 and 386±13 Bq m⁻² respectively. Non-eroded soils with larger clay contents had a mean 137Cs content of 421±26 Bq m⁻²; however, due to considerable variation between replicate samples, this value was not significantly different from that of the non-eroded sands. Hence, although the technique discriminates between eroded and non-eroded areas, the large variation in 137Cs values means that from 27 to 96 replicate samples are required to provide statistically valid estimates of 137Cs loss. The occurrence of around 18% of the total 137Cs between 10 and 20 cm depth in these soils, despite cultivation being confined to the surface 9 cm, suggests that leaching of 137Cs occurs in the sandy soils, although there was no relationship between clay content and 137Cs value for either eroded or non-eroded soils. In a multiple linear regression, organic carbon content and the mean grain size of the eroded soils explained 35% of the variation in 137Cs content. This relationship suggests that both organic carbon and 137Cs are removed by erosion, with erosion being more prevalent on soils with a finer sand fraction. Clay and silt contents do not vary with depth in the near-surface horizons of the eroded sandy soils; hence it is likely that wind erosion strips the entire surface horizon with its 137Cs content, rather than selectively winnowing fine material. 71 refs., 6 tabs., 2 figs.
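The quoted requirement of 27 to 96 replicate samples follows from the textbook sample-size formula for estimating a mean within a margin E at a given confidence: n = (z·s/E)². A sketch with invented inventory numbers (not the study's data):

```python
import math

def replicates_needed(std, margin, z=1.96):
    """Replicate cores needed so the mean 137Cs inventory is within
    +/-margin at ~95% confidence: n = (z * s / E)**2, rounded up.
    (Standard formula; std and margin here are illustrative.)"""
    return math.ceil((z * std / margin) ** 2)

# e.g. between-replicate std of 100 Bq/m^2, desired margin 25 Bq/m^2
n = replicates_needed(100.0, 25.0)   # -> 62 replicates
```

Because n grows with the square of the variability-to-margin ratio, the large between-replicate scatter reported in the abstract quickly pushes the requirement into the tens of cores per site.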

  6. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    Two trials of the previous child (preceding birth) technique in Bamako, Mali, and Lima, Peru, gave very promising results for the measurement of infant and early child mortality using data on the survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions asked and the analysis of the results are straightforward when mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis of data from 5 maternity centers, including 1 hospital, in Bamako, Mali, brought out some practical problems, and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on, for example, immunization or diarrheal disease control could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live-born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of her 3 most recent live births, with dates of birth and death noted by month and year of occurrence. The interviews took only slightly longer than the basic survey.

  7. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal-type parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series, and detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method provides reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
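
    As a point of comparison for the estimators assessed above, the spectral index β can be read off as the negative slope of the periodogram on log-log axes. This is the simple spectral method, not the averaged-wavelet-coefficient method the paper recommends; a sketch on synthetic signals with known exponents:

```python
import numpy as np

def spectral_exponent(x, fs=1.0, fmax_frac=0.1):
    """Estimate beta for a 1/f^beta process from the slope of the
    periodogram on log-log axes (slope = -beta), fitting only the
    low-frequency range where the power law holds."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)[1:]      # drop f = 0
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    keep = freqs <= fmax_frac * fs
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return -slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # white noise: beta near 0
brown = np.cumsum(white)            # Brownian motion: beta near 2
print(round(spectral_exponent(white), 2), round(spectral_exponent(brown), 2))
```

    The large scatter of individual periodogram ordinates is one reason single-realization spectral fits are noisy, and part of why wavelet-averaged estimators fare better in the paper's comparison.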

  8. Measuring Crack Length in Coarse Grain Ceramics

    Science.gov (United States)

    Salem, Jonathan A.; Ghosn, Louis J.

    2010-01-01

    Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated by strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness estimated by the strain gaged samples and another standardized method were in agreement.

  9. Estimation of water quality parameters applying satellite data fusion and mining techniques in the lake Albufera de Valencia (Spain)

    Science.gov (United States)

    Doña, Carolina; Chang, Ni-Bin; Vannah, Benjamin W.; Sánchez, Juan Manuel; Delegido, Jesús; Camacho, Antonio; Caselles, Vicente

    2014-05-01

    Prompted by the enforcement of the European Water Framework Directive (2000) (WFD), which requires all European Union countries to prevent the deterioration of water bodies, to improve and restore their status, and to maintain their good ecological status, several remote sensing studies have been carried out to monitor and understand trends in water quality variables. Lake Albufera de Valencia (Spain) is a hypereutrophic system that can present chlorophyll a concentrations over 200 mg·m⁻³ and transparency (Secchi disk) values below 20 cm, so its water quality needs to be restored and improved. The principal aim of our work was to develop algorithms to estimate water quality parameters such as chlorophyll a concentration and water transparency, which are indicative of eutrophication and ecological status, using remote sensing data. Remote sensing data from Terra/MODIS, Landsat 5-TM and Landsat 7-ETM+ images were used to carry out this study. Landsat images are useful for analyzing the spatial variability of water quality variables, as well as for monitoring small to medium size water bodies, thanks to their 30-m spatial resolution; the poor temporal resolution of Landsat, with a 16-day revisit time, is an issue, however. In this work we addressed this data gap by applying fusion techniques to Landsat and MODIS images. Although MODIS has a coarser spatial resolution of 250/500 m, one image per day is available. Thus, synthetic Landsat images were created using data fusion for dates with no Landsat acquisition. Good correlation values were obtained when comparing original and synthetic Landsat images. Genetic programming was used to develop models for predicting water quality. Using the reflectance bands of the synthetic Landsat images as inputs to the model, values of R² = 0.94 and RMSE = 8 mg·m⁻³ were obtained when comparing modeled and observed values of chlorophyll a, and values of R² = 0.91 and RMSE = 4 cm for transparency (Secchi disk). Finally, concentration
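
    The goodness-of-fit figures reported above, R² and RMSE, are standard regression metrics; a small sketch with hypothetical chlorophyll-a values, not the study's data:

```python
import numpy as np

def r2_rmse(observed, predicted):
    """Coefficient of determination R^2 and root-mean-square error."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    resid = observed - predicted
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((observed - observed.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean(resid ** 2)))

# Hypothetical chlorophyll-a values in mg m^-3 (not the study's data).
r2, rmse = r2_rmse([10, 40, 80, 150, 200], [12, 38, 85, 140, 205])
print(round(r2, 3), round(rmse, 2))  # -> 0.994 5.62
```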

  10. Estimating photometric redshifts for X-ray sources in the X-ATLAS field using machine-learning techniques

    Science.gov (United States)

    Mountrichas, G.; Corral, A.; Masoura, V. A.; Georgantopoulos, I.; Ruiz, A.; Georgakakis, A.; Carrera, F. J.; Fotopoulou, S.

    2017-12-01

    We present photometric redshifts for 1031 X-ray sources in the X-ATLAS field using the machine-learning technique TPZ. X-ATLAS covers 7.1 deg2 observed with XMM-Newton within the Science Demonstration Phase of the H-ATLAS field, making it one of the largest contiguous areas of the sky with both XMM-Newton and Herschel coverage. All of the sources have available SDSS photometry, while 810 additionally have mid-IR and/or near-IR photometry. A spectroscopic sample of 5157 sources primarily in the XMM/XXL field, but also from several X-ray surveys and the SDSS DR13 redshift catalogue, was used to train the algorithm. Our analysis reveals that the algorithm performs best when the sources are split, based on their optical morphology, into point-like and extended sources. Optical photometry alone is not enough to estimate accurate photometric redshifts, but the results greatly improve when at least mid-IR photometry is added in the training process. In particular, our measurements show that the estimated photometric redshifts for the X-ray sources of the training sample have a normalized absolute median deviation, nmad ≈ 0.06, and a percentage of outliers, η = 10-14%, depending upon whether the sources are extended or point-like. Our final catalogue contains photometric redshifts for 933 out of the 1031 X-ray sources with a median redshift of 0.9. The table of the photometric redshifts is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A39
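
    TPZ itself is a tree-based photo-z code; as an illustration of the same empirical idea, a simple k-nearest-neighbours photometric-redshift estimator can be sketched on synthetic colours (all numbers below are invented for the example) and scored with the nmad metric quoted above:

```python
import numpy as np

def knn_photoz(train_X, train_z, query_X, k=10):
    """Photometric redshift as the mean spectroscopic z of the k
    nearest training neighbours in colour space."""
    d2 = ((query_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return train_z[nearest].mean(axis=1)

# Invented toy photometry: three "colours" that drift smoothly with z.
rng = np.random.default_rng(42)
z = rng.uniform(0.0, 2.0, 4000)
noise = lambda: 0.05 * rng.standard_normal(z.size)
X = np.column_stack([np.sin(z) + noise(), 0.8 * z + noise(), z**2 / 3 + noise()])

train, test = slice(0, 3000), slice(3000, None)
z_phot = knn_photoz(X[train], z[train], X[test])

# The accuracy metric from the abstract: normalised MAD of dz / (1 + z).
dz = (z_phot - z[test]) / (1.0 + z[test])
nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
print(round(float(nmad), 3))
```

    The split into point-like and extended sources in the paper plays the same role as choosing a well-matched training set here: neighbours are only informative when they come from objects with similar colour-redshift relations.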

  11. Bowen ratio/energy balance technique for estimating crop net CO2 assimilation, and comparison with a canopy chamber

    Science.gov (United States)

    Held, A. A.; Steduto, P.; Orgaz, F.; Matista, A.; Hsiao, T. C.

    1990-12-01

    This paper describes a Bowen ratio/energy balance (BREB) system which, in conjunction with an infra-red gas analyzer (IRGA), is referred to as BREB+ and is used to estimate evapotranspiration (ET) and net CO2 flux (NCF) over crop canopies. The system is composed of a net radiometer, soil heat flux plates, two psychrometers based on platinum resistance thermometers (PRTs), bridge circuits to measure resistances, an IRGA, air pumps and switching valves, and a data logger. The psychrometers are triple-shielded and aspirated, with aspiration also between the two inner shields. High-resistance (1,000 ohm) PRTs are used for the dry and wet bulbs to minimize errors due to wiring and connector resistances. A high (55 kohm) fixed resistance serves as one arm of the resistance bridge to ensure linearity in output signals. To minimize gaps in data, to allow measurements at short (e.g., 5 min) intervals, and to simplify operation, the psychrometers were fixed at their upper and lower positions over the crop and not alternated. Instead, the PRTs, connected to the bridge circuit and the data logger, were carefully calibrated together. Field tests using a common air source showed apparent effects of the local environment around each psychrometer on the temperatures measured. ET rates estimated with the BREB system were compared to those measured with large lysimeters. Daily totals agreed within 5%. There was a tendency, however, for the lysimeter measurements to lag behind the BREB measurements. Daily patterns of NCF estimated with the BREB+ system are consistent with expectations from theories and data in the literature. Side-by-side comparisons with a stirred Mylar canopy chamber showed similar NCF patterns. On the other hand, discrepancies between the results of the two methods were quite marked in the morning or afternoon on certain dates. Part of the discrepancies may be attributed to inaccuracies in the psychrometric temperature measurements. Other possible causes
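
    The core of the BREB calculation is compact: the Bowen ratio β = γΔT/Δe partitions the available energy Rn − G between sensible and latent heat, giving a latent heat flux LE = (Rn − G)/(1 + β). A sketch with illustrative mid-day values, not measurements from the paper:

```python
GAMMA = 0.066  # psychrometric constant, kPa / degC, near sea level
LV = 2.45e6    # latent heat of vaporisation of water, J / kg

def bowen_ratio_et(rn, g, dT, de, gamma=GAMMA):
    """Bowen ratio / energy balance partitioning.

    rn, g : net radiation and soil heat flux, W m^-2
    dT, de: air temperature (degC) and vapour pressure (kPa)
            differences between the two psychrometer heights
    Returns (beta, latent heat flux in W m^-2, ET in mm per hour)."""
    beta = gamma * dT / de                 # Bowen ratio
    le = (rn - g) / (1.0 + beta)           # latent heat flux
    et = le / LV * 3600.0                  # kg m^-2 s^-1 == mm/s -> mm/h
    return beta, le, et

beta, le, et = bowen_ratio_et(rn=500.0, g=50.0, dT=0.8, de=0.24)
print(round(beta, 2), round(le, 1), round(et, 2))  # -> 0.22 368.9 0.54
```

    Because β depends on the small vertical differences ΔT and Δe, the accuracy of the paired psychrometers dominates the error budget, which is why the system's PRT calibration receives so much attention above.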

  12. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    Science.gov (United States)

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but its ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis, and in many instances estimates are required for areas of the population for which the survey providing the data was unplanned. For areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.

  13. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    Science.gov (United States)

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but its ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis, and in many instances estimates are required for areas of the population for which the survey providing the data was unplanned. For areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202
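
    The gain from SAE comes from borrowing strength across areas: a district's noisy direct estimate is shrunk toward a model-based synthetic estimate built from auxiliary (e.g. census) data. A minimal sketch of a Fay-Herriot-style composite estimator with hypothetical district numbers; the paper's actual model is fitted to NSSO survey and census data:

```python
def composite_estimate(direct, var_direct, synthetic, var_model):
    """Fay-Herriot-style composite: shrink the direct survey estimate
    toward a model-based synthetic estimate. The weight on the direct
    estimate falls as its sampling variance grows, so districts with
    tiny samples lean on the census-based model."""
    gamma = var_model / (var_model + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

# Hypothetical district: noisy direct poverty-rate estimate 0.32
# (sampling variance 0.004), census-regression synthetic estimate 0.26,
# between-district model variance 0.001.
print(round(composite_estimate(0.32, 0.004, 0.26, 0.001), 3))  # -> 0.272
```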

  14. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai [The Military General Hospital of Beijing PLA, Department of Radiology, Beijing (China); Zhang, Jing [The 309th Hospital of Chinese People's Liberation Army, Department of Radiology, Beijing (China)

    2016-06-15

    To determine whether hepatic fat quantification is affected by administration of gadolinium using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures analysis of variance with pairwise comparisons was conducted to evaluate the systematic bias of fat fraction (FF) and R2* measurements between the three acquisitions. Bland-Altman plots were used to assess the agreement between pre- and post-contrast FF measurements in the liver. A P value <0.05 indicated a statistically significant difference. FF measurements of liver, spleen and spine revealed no significant systematic bias between the three measurements (P > 0.05 for all). Good agreement (95 % confidence interval) of FF measurements was demonstrated between pre-contrast and post-contrast1 (-0.49 %, 0.52 %) and post-contrast2 (-0.83 %, 0.77 %). R2* increased in liver and spleen (P = 0.039, P = 0.01) after administration of gadolinium. Although under the impact of an increased R2* in liver and spleen post-contrast, the investigational sequence can still obtain stable fat quantification. Therefore, it could be applied post-contrast to substantially increase the efficiency of the MR examination and also provide a backup for the occasional failure of FF measurements pre-contrast. (orig.)
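
    The fat fraction itself is defined as FF = F/(W + F), the fat signal as a share of total water-plus-fat signal. A much simpler relative of the paper's multi-echo T2*-corrected method is the classic two-point Dixon estimate, sketched here with made-up signal magnitudes; the multi-echo method exists precisely because this simple form is biased when T2* decay between echoes is ignored:

```python
def dixon_fat_fraction(s_in, s_out):
    """Two-point Dixon fat fraction in percent.

    s_in : in-phase signal magnitude, proportional to W + F
    s_out: opposed-phase signal magnitude, proportional to |W - F|
    For water-dominant voxels FF = (s_in - s_out) / (2 * s_in) * 100."""
    return (s_in - s_out) / (2.0 * s_in) * 100.0

# Made-up magnitudes: water 90, fat 10 (arbitrary units).
print(dixon_fat_fraction(100.0, 80.0))  # -> 10.0
```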

  15. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    International Nuclear Information System (INIS)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai; Zhang, Jing

    2016-01-01

    To determine whether hepatic fat quantification is affected by administration of gadolinium using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures